[Binary archive — not human-readable text. The only recoverable information comes from the POSIX ustar headers:

    var/home/core/zuul-output/                    (directory, owner core:core)
    var/home/core/zuul-output/logs/               (directory, owner core:core)
    var/home/core/zuul-output/logs/kubelet.log.gz (gzip-compressed kubelet log, owner core:core)

The remainder of the file is the compressed gzip payload of kubelet.log; it cannot be reconstructed as text here. To read the log, extract the archive and decompress the member, e.g. `tar -xf <archive> && gunzip var/home/core/zuul-output/logs/kubelet.log.gz`.]
JV$ۭUGVeHvʿA6M#æz{bu2n_2됳㟣7Ґ[jJ0›,Tzyꌘ-\XÇ?vrx{~,5yE7kߴVݢ=lS8D3NQ)&M Cq) O)g$rFEqa<?F!L4: <͐!i90m̈́QE{tgiϟ_Ti܍Kߌ#t k2n@R @M3SUQ0< +pp ;u 3;1,ppME-fг  `2l\xyiw3z%, yX/db^  Pa%CFSӑ1 >0a"MB!`&dꟌTtMϳdt7EVX'1O$Gyz}tT4~U6^s,HiŲB-hXPuEbX !A /"J7#PSR:2lZ HHBӷND Tz StRB7SJ`o'xD+ICЈo*Hq|T,<0BqlP啝ahklgqQ EB62NR'"gQf;4FHf)VTfʝ[qyP%<]6n 6'sjIrE$B\ 'Pgf9Ig=Jc+mERZ>&wa\+M'$0xa cP N?ҫ.q:yqUGM3n#7iVgsZ3U壮սҤC=}) ӡ՞gNgM0Vn݃߻՞g|gUZDsSsh9&݉:ewwVeyK83aq'y$JYhqYW s,jerZ(UjiNFm7 qiCNݭ{MSqW_rI=.xᳳ\< IXbcMDOUFvDZ/9Ḻ;&o]1DIQ*ɤD $2\@Kٲ/KƙJ5t~8 eCo=hn)7t?cnC(AbQ\̦ٴX{ D^}xf_uIz?n PФ^Ũ>Km?92FIB(vAWtϿ9mG4t䁽>-DMh ?lBjCӪ1?Vpqڇ**Z뾅V[8&f5},Wl. @!(*vXI8VE@=u>[$R*(^cCȽk e׷\{WQefA̓šQ ei۬N׀X`dZ;4͍qVٺcjɂ YoBL[yy8OnEc5Uk{+{g.m1|svӼӡmS\F6vC֞uv._O+UVwN  Yr,pPJKӭJVOP%q9|miiq$6-Z,CΪpL*z9Wt+E/R#H=ג}>sƥB}YdKyn}d8@sR;i;=gJb6и9dA/˗㦭=o>}d)P\ %ETF,k1` 9&A#QH_,6ZXT Yq$;k_D4$ef*PWA]r+;mhف +܉qwlk7 B0" ب + 1nEc,`RQkS=.z O?nuyWnP[,5'`QZsR0mģ=/&k]mѯ?vܤ&*@Ldϻ0j0Wf*ب}لc1oy̩(ۘ=ž= E=7OsAb=r`PHU{p\I:z׎CJz-MD..W)xzm^wI4 լ /TQ.5rw{uTa̕ q0k:i|).a2 h9 j <͜Τarͺs]).?"{,VY99_;ߋSeyIiKN\P*/TIKu%b%YqWu彩Ӝi{p!O[8 ox|lGW,Ϲ,X6NH`h!uG@X]Le&-&fT%׌|>ypD sO>7*x`Ҿ7CL >eם5Þ yO搋<վE-wAȰb-M Fg\pNhϨTEix?/?]ίtfwGʧWZYsSچ5^fKGuLSXUv9pG^ȥѻ:ɫRHP3R<uz[+K"z.Á=&+넝OsN07B+ja6>͒hLO\[TΓwtb{E3f1ܤZOtwdhii3םQ TSw 6EǫI i.ګq{Dm{(~1t>Y2_TYJ4+AOܔ~y寶bG3E-O!Py._S8 gx}QpcM܋"/&&{G&|&yEW2I&KSH=W }ރq{UuG\t+:L9ܿ39%>xldE 3H&|iJ&)O9ؾ=Hڊ}'}T^Kݻ6*͉ E1.M~tȧG fgMP ㄐK&|Q+?ʒ6%t5Xz rѲiy7L)׮;pXįPV2͐V ~{l63fיV _kD;_V@Ykhe6ӎz\81N%<тQJ(D s;;E7$jG"rj0⡖ÄXIČXI3҃D9XC+CUT*B#_"i)b]6AK@hI hlOn4k6?Bh5~1eJD -oMA%cR%5je Bcص hJ#.4j+fޱ£س_Wf߬7Ju 4lf2f;`k1tp:mazoo} 6+B˫G|=&a<جzm^&iNi0R3XRfD)׎8z`YIi&S 6i.^+P7? 
wH`$ii{ M4iMC3[VEalY\ӢˡǮ2f1wzq w 7o8wR4l/"іwFHҖt&Rik+'{.vTC 6}N?U lb[ 4n4%D=O3 "({Y!0sPVmƭҫ%5TGTWk1)9ԚEcA @Rc B6k w"E,-&y+IjW0ť0"OMt7imsO+Niݶ=M.hIpD0( HCI|BPVM {諡[IH,78.c,]l0 HADY%I7)+*bIsƐc] EumVFXU[j(!O3&P9$5TSE1TYo^pZU4NQV"j=7;v]%4-R[2)DPs  &EJ`TR{*P97O%pHV[c$w;Uhx:P2<彽J4и$ _gߨ8H*Z@O>BA ~)|@,0v9K3`& ;5POR44+h\M UK@qZ@,2wʬ,x^c8U2DfPa5H.pwо5\]c5$jYpDQQҴ SMH _@,<5mU(XбJ=ZNɛgդTQR]p2"5V8i,TGn_}P "/I6@pP˱h]g$7^R*̺UgC謻L )Fr-,B~#m6BK GlzRtǘ``S2>(l ۽LwbCQ{*r8#U]6=i:7jEe@QIvҜ2W^8 -e3WuI㫪iGnׂNa=\]=v9I[&Xoߪ3MsF3^8XԱ PBzM2HꑘӢ;4Ʀ(J`-ȿ2S|w'{ @Wl'!֞_vw7XC(r~G } iR t#Ƕs(ǭo$8Y1߆Ǹ3yHMϻ]h_ z?v,!EԗV:8`Yk?/̼w?]턓&Вf?>eyYiX=~U+I& ])9 G ' 80Tv~b j8&JPŪvdtHH31p¡ `coGůckKkFP؇ 64$g]"YcGFA|v|n;x9 ȋ\#=B\U V~Y272l\H!V6x\Հ!",fҷU 2Pq6nVS1$B݌KmmMKm8`H|^+hܪMf|^yv 4|Cԯ jsj[5oQ)u2{C|kXbwo(<:2n2Y=`t59$+64D3Xv5Z6m9 c g yVݰJMt^u5-ML3Z nֻF&T:4(Մ@'(H\+ 2`8Mgֲ 4~Td]$TR:G%O#,cD=͉5&{_bE|csl]Gf PE?T Q cSo!0ъAFM1<@0m0'p qUL"tvj3sl [?x>|)&o;dA=7J*I(J&hRd v1"BX 4֗/jU*>H\UN U5@I\.(Q+kkZ0aU7䁽>-nHDI('f)HJAK+B%*0h5t@ \s͸3pm*\lW|րӺD.z/״TJwG_$FR#3 ]_>>%Ԓ]{jćRɇ%RSS]Uq28D&ZZF="MF~Z(I&WwPTY ME4-^#4sCq{!@}9sA+LvDҬ+QQrؿ}Q#X'\7<,H|\0l6"\IJT&\&[gufaJv^l7)-Qͅ`LJ.bᆬ6D BAqR?f 06Ӹku،šFՃ; KҒ ZZE鄑ZÝ 40 A/Tvg'WST8JXDc, 0̻n|jAS[q =-Qahª0(Y . 
hGuаJ\KY%\yjr6YdXe#c1u^Q f16+YA1Eh1L>{xf-96Y(rJW+Wb<"!r*%v<\ĊƽA@d҇?ݨ7& ßO_dr7>?+KQo0n)(s.9ggSrVI7?ΓR/Ç8m2%yZ>ػ҃ N s!0(Ίw& d7GnFYH&"UZwMAygO ;a *>5I ٕ?A FΒk4H`#)7?X$]_ hj:$֞s2g4 :/& qpV_FԀF9gMx]q'RguI+"љi^o— n:dą],]ꏹIxY~򡏃r.I&ZI˸mHzz6%yb/$^_k 6ogOvb7u~Vcc5,B#Ղ6QDks4['g..јzS kq/ FpqU85vd.F6cuaV;O XgDh~s)'y̎܀6bUm"AN6uY36u> hd\8Xu72תlDtqLF9x2{O4A# aZ5 -71d—1C2zp~>~q'rqR(\'gM'5\l C9;,0֒i/ YޚQ:>{Ol醴_ kfU~ TT?I~PV7@|Hu;CWuJB0Ų/:Wq㻋Z<=NH( #@x5NR[p͒c&*%D.N؛|7YH{-puH^4YfwӭZduVgW=ybgQ48grR[ l/2y'dE4,5ܯ:ֹκ bԚd"jBT ֔NcB YU I䃈Fl]9e_2/"wtG LX=)Qy.]uͦ\ӺB7'E(Л56ޏ/Nfl#;eu hHBy5ݵݸMkEJe Ȫ4;ǟ>W`W&FۂzJ>%TrskS:תkDv~0(y>γ)cen3aRi l^NMX2t6:{(:l W+P{?d)J1.ik3>k}^N\ۗFS0éK1j rJU}/[(r'{[c"`YQq6Xb<|2N8CFA ԠjԘ`uo1Ɔюa Nh} b8A1`NPHV%5BCa0FVs!3?T4H}GS05h*H3*~ ,;JmPVМj`MOһHejN@L68KhNU6ædX7lR>7)W"c-q,P2T`y 4PWd- /q#>ɽ^zZguߡ Y8' uRy}tK.NO4؇ԭ*24O zk LLۆ2&_摧9h䩵!kP2Ov804b$DQ 8 (ͅZ<446IT%ůꐫa^#@HBw7úc?CUp=/=/\),:0׷qTMTD'@eD446yTk6̜ƳcѲpUj8q !OԠv̋^!@6Ȭc_l0= L&Xds/pg!0`hu`8Dh,4*N|!:xqĜDk-r\ZzFl;1 gH#WHiwILrL/.u?<5m UMDCݧw<܁%qkŧ_nvNq * 261RT 4LʀP)ڧ׷W<MkZe9 G>ہ=Sb׮ON4;`0 aqe+r*؃Hڢ 4Kt{57(]:7];uR"1)io6x 5) (>JUql9@'4  lc8cW9g#4'BB.':dW֒M4-;_:4jl`޸Crg  wa`*[yVS̍Q]G `xUg $#[vpH)dTsNV!U"ZeI=Ov˿/)[qQyŪjq#r#h`7~Eoeoeoeoy~",&zFj#b6ok^PVgR4%7*;%/U[<΋UO1/>7IEsю0'Л\ИʢƻX3b~anñ<"41KuvPL6T a .Ye6uӽtHatmꠝBwt[km *%@ћab{_3"Yt˒,ѱ͔DZE"<ʼnqY\b2;mXB- ;ݔ,F˫h۟?q_6"qΤHsr5+V}[PK(H3TB}ì˨|G46Q#02T (ikn#o!{rjc!Sk‹Z+~ϩa4}V@3  `G춫bPqqjLynG|)B2}m!Gt&lY"F‡!ӑFG wPp6OY廩B0(zD/ufmƴ ٽmjhOl)=̍%8߱oF_1 SN~-?/#K IpBd.]خ ,D]nlvk<i7Orz(!÷%>8W:_Ul8Ֆ­=ꍅr1Tr6}Eݗnݾv%n1ҫ-?/[|jUj&by>_e c.ot՗ˠr4,n(I4ۙ͢U鷿.y(g 5U=mW }88~ VnawIq$z_9kgC.뷵 r_Ig5(QX7eߒKN E! 
u#(BrM$D3h^Ȉ4K Hb/=m3T,e lL,`ʚnnV4ࣨǩ$ c\]`6z({rcnCQ0q Bk0aO'7RX4FEbM`Ud|7, /pq D쩈 ]$3JRGuul@k^%IZ|4"'~{i|=@2y5g,3 k w (53qv8pc+9207ab=/&<4* MMCYڳ#^Þe@ $Ð&I10f1R(&1V$TKrgmd#Yۢn퍐#dQh6kRB]J/ Wsl>1<SC 9@)l8vu9Hf8uxXO֜^HenIjos ޴{ b){  dIt4;H @ I{$X>hv$uԂh˳]8Uk/Ծt Pe,Vhg< f1͔bTi啤 /=ՅMM@PѾI}kH.T纃mW.8eIy)N @$P @aszn^ئ~XIT7c^ۧ)>XK}7u6SCtaa|ΫF_pqf4uI 41"β81 ;[Ud, .[~x&O+ ÑvH($ֺ'YJ`Ƥ`pd4ub(1tJݍpV6=fcmV=7 *RI /$aąJkQB"Sfy_wҊ4Vэf@4iQgfPljN204ց1n@!(VZʴxz&Sk ^mG_<1Д G]Xz55X`9ƷBx`s? ѣ~Mo csl%A@qASb)J.-~JL>jN !W(o<#{ DfƞD @5XCe*P8ZL&q U᣶Im~34îƤapq utEjpzm_}^jFū\c.3}>,1 P7'f)Dqh& PR( mFPbJRkgCdV?D3hUWBz䏓5sgV/õ0&&dP@uF̬!ZfH18 LMF`cc+{p* |v{SO3#LJ8> DY7z2]#2giOv,6z4ѧuUΝ&{h'wjtxoX9&zQש6n3zwW7[)=HO&:޵+ڕ;DA\^YUٵH yQl^ W&B2 ; #koݍjrj*t :vCёF=c ;3+a/|r" mRP XsRJty:nK|>.xd p-:ýu#(rF=5axT踾b ;+m_alVJd:D>3)QAvQݲp }HŅNr!" ^tMyPyju ^ '{لR (8io hp=5:ݫAR(W]R*O0*X&]$#9P?`z׵Է2 {D5Gm4F)~5Ubc^ν@󛌪>ĒzY*sq`NgF(yzAXæON0mo qk!Xw\,3ڭ?_|.߾=$#LM3l5 {e}A'8m24)J H\v:ʫtTh\e_ eMK|N kE(.r|/$2۫*2Z 3s\«R sD=ָ Ns{_XDKa]O.wy|yς!W1V)N.mc+3(7MqLWK[e:-P@JY<*HjsO vB%k`툶/@}*5( hb x)5cu\dgP}.:D^Bpv A#%~VfqeVE 촣>H44hCei L;S%>JXynP6@_6@*b(&PМ+HNt-MB0e=Q|Etb*NBV*XfuY[PW8`BB2>lS!a]Yay*dvQ#qw{.Q`Ig/<]8 謇+ jaÈt¤֗:`EOb2ruMMP}0FY0Ӛ50Nk3fgLavL39 2A1&022a4^P=DpR+?P 3}|qW@].$bkceӁ"pt7v:+ ,eM 7P:C.f"Kߴ$/]̴_Z00!dw @SnnIb~!{*Z6z:(}h&50#kC'N i5!*tllxGfRKzs b pq;}1Pcߍ] _VAxˇ|>v}dX{P5oe_oW̷+kY|ݚ<ߢd<2cGH&v!c/m + F(j"#MhUgVhpts06Pmt+ #N@t~Da<ٻB |+fҝ+m_Xa iP`53X e&oڶBjsV{5A!.Cl)n|A#ێ2`t0vW\ʡvT :T/uTDmd0$B>\DrG /N|9K1 x ܺ[vA>h?6BwD;c3_aUM̄mcf*w*?&Z1z% n-&6z?̪~wYhbڲg&yMHxJ>lEd^&RX ڞiN:0Z)(\#J& C$̘S¼ʲ0P/c~u$jA 2Թ8w_׎Dh1d-=[+~ zvaOjp37a1c/Y pk~&h}G e#kؘHףfk w66yP`ԑYc}:E_S>;B˿ yz 7ɘsAA&7?OrPy0Ev` 30[(B(m?Au"xLHU\Sqƈ&p\!G}A73 s|0D&0 1mbͮAgdR\j}#h=P9R|َn|Xb[4RCUlP ,z9eZfShM3hoM^ kOl/MѲ69aAF&{!AnNp?@QBp'JRGp\_rs{3y2SO% K9fD:?mBdJ8L48wmJ{{9 v(Т)؍8CYeYr}ıO$tt9<3 -㴵 $ء`8ҳ-p^?:[|g-XG[^7֪WY5 ,%6a=E˼K?X7r6/:2`Qfwd?߼>;g̙5x$3%͚-#HN9_n6{n6BHvgNޜU8<"jы=!˵ڍTo"lF'}Քjh\Ngt'pa3Hg'Zӂ+Zp}>-.z].Kxo3l4nɧ -§W)ƍMV7lx't3ILkܼqre-ٓW"7<8672Mj^ flYVu>vȰIiضôX`[Ǘ qޒ 
_,VddKm\;ӏv,x_Ep>+^l3yHdܼ5 TdVwщ6Jg$Z"yj"8W\^rT45xyf"a+:3PCknhMbϟNa)@>XE[rG|!ſvNtѱ0vKi+_<.roAHt܇"|A> *s0<'ejMdk V"4C(=MFo"i XlU60b҅RTb=ZtEQX{o6GH^_D\?vB`+^zCufĮXJؐ c\YGH$OBRkhP^yew\SQhnoדc_^󌧚LڱiV=ﭘ[8Pry:襄Z~S4E1ծW@mCn `q{uUZ("ǜm.K6%}^TM]k 8A0j/! MT+fmC2:xg\ή$ǜ BsOMLȅؒ^[|2hmхBtbD_-U-hy!vL:S')Nhc>BfLS'n}H^7r Yl3ըT[p(ƹo:u,UĖUW;m.}j2 7?I5!xʠXZCJ>@IllrC=tgwIgUnz3@irl;,~_lLo{7=ē%.^!TѶ%@.0u %F'Bʈي5c b{Gnj10søGD5|hT^Z.eQe˛P0$ݶ`GK.dkۑɊo}st7:0캳C] K>jĦ1Dj}'"JVi?*RCfU! v@T,ReB'@c%R%p 4i_l+c` b*[U _\X!Tmٜ4+N3f9ASdroga jV-cKH楀 HA ;K$(9PDI(HyЈifb옃[NP OŌ~gfO/[|^QxR&;?(VntsP43 .#%NI>i:,_$:)ܹIؽ]||1<;f<乿dΦm/U 喢_WH5ϙr*|5"ybv kz@Om4Cukt!7ڄẂ$wUJ@, :1µ.i=QѤu;`Wř)/]PPr5ђ|.C,QFcCTc9Ny4'F3 P-1jP\?ͤ45g6}]5l9[$ѳ1K.^vHxX~"^e0g܋{ ڢ}0A:9xEU7%[<3&I$4lcu |8rmy8ɖ(j[YԮ{&`&Qdv'/g6~/{)7p3g+}г ~լ顜gka H諵]<#ppA4kGuHf 깤ekr uJpHPj \1%sE8G4E\R{5uXsexXXLg.B9 qU짡&#Q?DMTNobV1%1dBh1.ԍz?Y : P$@ ΪPQ|uN>Ƅd㫆U(`$22$vYq")efI{7pz8KǎJՕ}'hW9!,G클3kYֆbԊF- G$Bɐm/حF˃_LjVx>O+fF`&ccbbfljdH5{˰pyixsܲ+sѶbhYP]W0vpvbi|.JC°jI%PgE26%WnԴxi(:ګ(Dv'ufHC'; KβSn@qdZOQHb< =mW݆+˩-yDغ']p{I_gRŬJIy7=! `*Ƥ[DWzV6z[>Yi9 ddޚ/U}gi{bsBgX}BË>`dOzz⾶4/aC+T7OStF}Px{";ቫ޷yo&k @?JMO΃TOqnl̯\0HjWq)%mf޸?tq 2}y..؁7fo!zFD^xsbgI.(989}??<bG>n3Qjwp:~=SGGr}͗h =6s3S{^Pr= DpV/V}EW35>)_#6 < `ky==_Q'J%|7 reN~d_L#78~Z6 =[\9#^&PEͶQ| [8,H`xݪ5(A&c-(Gu֭! 
]Q%o#RFi5u>:mWYγw gaqHO\?)͂^gwk+uz~uŹ4ruGklݖp{*jbTi!^}󌽉z>!"Oz'J ^ȅ2򓙏=wiz`[oDt ?9S;~IBݴkӂ+ `Eӭm0-xGG+{QX wR;WW+ 2NxJ^A7>62봫 ]+w7ǣ]RFlR\ E Tv{FiHrb"yf3ڡY UZ=Y9/ӏfhoSz.n1m0rg&b7=Ԁъcnf2Z3ų̴6̼E9fszu߇om[;Ѣ-`v@DZ.PW{ĺ+vUEs\(KHlV"h6Ex T v }Р]C~ئKgCW9ue6oq\^h:!պDK؟wu\Qr=lv/ۗAƗw{ޖ:dgJ"('4ZŵHBa¦Es6&gJE'[ko4ADЇ_COO)7[5"ĔSB@:Ff>UqFeEsٳ!ڂ{(Ra_p`rmF1m E Yod1RT{z}~C4H }_kLBc8#&Si&ͭjMH~ϷV11~Ŝ0b7WIGByQ5pf 4K pgZVTW7h_d"Z 5"%gMJQkUh,}gR>Lz=>,VLOV o056 !4CNЮ?bU-v.SW6Tp9yYlM޹6)Q&CQj,=8yE dИww,Rk^yٻe}(@nt݉wpĞi6k;maZH DO8 s{>7<6QhcV=oZhՄY^YHQZ]RDu.qt5j BSKmK,N8sa[UwkɨSt;'b%KBR޲@n!{]rmTpT\Y->(~n`bVE@EzRV,%E\Wyդ>>C6Ř- T<#(t ?,jEWոHc*#+t?+F=)F!_>eXO}uSrQxD6- b`(.5x͛}N7gz<; +kVp*n/aܡr>}a##vHvteiQ x?(߄Bw15,=J_"C0#LALeRg%,]8~p6e*wrAROD^ZY"Xɍ}Dm]GfHհrc݋khM=`#7NlЍX.P7fZs],96$AHϨp:fMbC-%I"[:݅zOo?0ay[~座gQ*u~]7rp4W[y(MINܓ[|o)ǰVcf3՘l56+9ג NYxL׭GfEF&#;&S\.)uL+ۉd@o{BQos}WP76t5"%"ՓP_Z9 uG!\?XORӰ|nQǏU%hiԧg1 0JlE8w@2PJǤ)tUKQQr0[pݷCmJ :')*CrV7ry^p4 ㇳ%" wWG^tQ?M?O^)bF^;_;=F;ۯjF]UUi~mc\jdR+ӈ^ҕtWg8{DqրrynO cg~vCQȩC搧YnX%P0Ffǝ$apLJH5χ1ua*Ke[ISF)oQ$$AR; 1=gFF߮W&772v0P-l2"ܚS*uJD \^Q_Sx=WrR n;HR}5CI'<.K`csv$j,&vQ̒tZ gWMWی+)sG|N?.&jEDݞEӜ>y ^ot]|d¡}`!L{{ͳqp0qpE:b;IN3xgh9A/AheOhVwN\|tJqXSܽ'9h" =EԒz=J6=CkI*# `T#sIr➢~(@k+u=Y<0M+ޯٺ5[7i:[rBb^؂ 1:d =`Xb a 1Q<ny6Lm` oP'm8vNtF-Q'ڨxUЗk ! tD4(FjˌrtcʼnINqq7b%jxu刼F7He"25F7&움s}V̖Hͷq5HhZgȚ CBHˋLǯ컸&Lv S1 @k%!v.KƄwM K"|}|Y;f> /Ë'0.xyFa o!R݆"C`k[hҾA]Wv#7eTk gG+=o0D+Z. ̜Qw Hev+n$76㗋/?j[ <TПd?9:=ѹWϞkE2H* Qڋ<`bђ+={AdVzp̍0sǯU L7fhBoEoilXS&O8:k׺=`@'l} 4Sқ€JGƏn7jtȻj"%둓a.H@s -w7kC[,BmUcD[!X1A\o1B,ixFe)&Lޝ!U(h眣1dL*dKꞜ!¥6yA_Jǐ>c`R @"YC?k|ʕپ-yկ^sT6FCb֣{2vܥ[6W?"WFv6m0}b^ruYcOi }m/@hٞ5i2)s>~}Kf pW.-]68dŶBζC(lЂbYSYu2ɢ}?wʡf_.G$REA`=:SzNі@6֜DnXU!vJuj]1:Eڢ_Kt]}6U*4)+lF҄}c+#vԙˬEh#b|`.5ϤOp;c\t~r^!j;gnILFh5܀qa)ڔ:%+ny s c;d ԲǢ2C+$O TM>:T2ŅWa"w҃?QӳʻS P8Myx9L; z4=GMoezohb.(cc;E?idٟg[洊.r3aF144tJ+Y-͖ D(WgͮeY4aq9eo˸Q)2??v5nz0F+UfIJMɄǦ%e Ş9eX[b;[DB'}~jHbk gJ&U` x+ɉ{2%^$|yP/è* a'X>! 
6gOA:QxXP wˆ{9+n"AH{ ԫ&̮<f&(10qAM*zUcqɍ'K=2?IZ,h.|P B*28TM5KpEn02K@WS6L![(u_R.eǾ'O>-#E,/T,z)B!ݣ[=WRA;zr /36ZM]2ӹpxorbQMz# UW.2WUc@=v뱹)BԅZpA PTX˸c =-G]JiΞ1ɽBԷ8k^M^V5S=W3@J* h!3]^ r|PqpM+R赶"(A9^aV@6)WG9jL+a[F(j>8ڄgeMτ8rMP2";:KE}u5($x?;Z%9QʵjxY_Q%ɬmPHAIX4]<{ q\ C۝G5btS s7żAC`qq}dXF+阌,{p;~(9;';9=@Oߣ+;xY 1O(e]Z3E,7ܾߙX4ruwBR2\5n`6L L*LyZK8e}BlwF7 b) dםYyGQH,j#|{>.e#I3۫1Ol8A4FN&7~&HdabTpǩYO멞aNq ͘E*Uœe6ٔ¹H.12 (Jdl{+msUK6H+# Z<Pۖ9dz. _c1Nsqܠ>t 8J&7v/kIN=3xq3 X\V>yy;a>_Z3b5(QV./[d\0|jfqdxnhf.9+א򿚱t=-9}bfX*\&4I^a,7|̨=nXr|IoޑJrM|f0K5V :ta F5 Mv;TRr-8"[bm&.KhB[spGd%۹][\(KS]ζ*ȋY:GjIndXHx}˿&luΟfj3i`+x}&lq:ϢTEw G: *Q@rOGaJqtouv`Fo3*e.0U;4s`b^kkrPsecЏ"r, 4>F.hbSlF/k? 0ucu";@xz_"EF0&nð07z`/>e%O9ӳ׆R/K:vcZ5aH yvnFb7),IX9wjAj\f4VRDžse-5ur%bц%6&1|towa}c\y2HD'ȱLdǣHJS _(3;=x~kNc|IfEYQMʇUQl"M4ucμε}%3 ]FE $h9J>RnzDnA@̾c<)rA xrq-ɉgV796>o귟QQI ZjD3KxRco堇Q9h9= 6vXnun#?d/*,n)s="J써]3BΚZERhJ#|bIbr62 (9Y1CT!v$.zfЭn4X.`| čz00rcʿA$@v2Řƻz}݌zaLe,Q)αF 4Z;_oSNO<,sCnv[ˁFA-6}tmvR=g/9ܲ% @0k@΅b ʍTyշXyyH^rpo?0G(&$O$[ǟo7s4ZBC0%[lK Z)HuvRmF}!b*>6 [;g}KK43d6.bSFo1fEG;kIn?3ю;/T߽9_ELZ",@,l6.)j4g|Rae0yW^@KwEM1hcN-SNQٰ͐y4ed.tj7~kӗ̊3}>h_XA| ɸBKf>ק(o^P*rټ;?*D.#չ5I6W@5ȝ¸d'QT@ʔ"VG&lK]6KҔ1h5psBqvHv[Dz a a\5gηv < Ψ Ɔ\N? 2@#?7@栶䪁K!8ӔW?+jn)@g ZF&AyhR]Cl -k{>oΉB(; ^`~yӍYh3M~A>&{<^}̾1O9l(SD Gߐ3 en9"|$GzrDC."MV1FZ#)m{٠md{Gi 9ت`R/!%R9jA.4ޜr1O 8N)DaݨH|? Hrʡ׏=Bj…/k(.HJ m nԴGZ…zB~Lf{s"AgT> jVw;,at"y4nrRQ؉#aAv o 2xV3V.`S~~ao궹ov-/[Ak`h T7v~ .AgoN!Aܑ$a=?/姓?҇7+$s96o@[L{AuEDD` 1.8t~˭JI}kz_M6T够ĉ_8O/{'_vv)CNʧmFc.O~k}Kdwq{q~-Ӄ"KcdD }$ TC7셁~ oJŊ p΂:F*0%~aNSmT092aNnԭm!kYqItZ^;ljMzS<떥۟9UQԵX{ݰ@QWSH+!eK#B ] d}ҲU [Սβʁo-=JxZs"=>n8eXɌ:7#ill}W Zsl(Ps=EH mˡON<GpO_ıP.RThbwO1t砐N>vUl]5z~>(㝶L)zS5oIژq^ iVRcn(\)Hf A=Q%lY2T?o0޵u$Be/cwW@;X>f ʒ"JAVS#&yD;``9]UWUIԜfd6115Uhb[kDf۬"6 *d'"@P"-r,~^Y_p (.-4,d;*n']~K3EEa`c֪VAʒ֮4#9 D:H"À)nM@-X; BiFʉ#!GOi ƫ—Z]SKO,Pr97A"'x络#_m,zڵF`ٳ{PpyA%|ِR JKd&oB|FqBFi6A&6xvsZW[HK'xza̎R&Ҙ[ I3,dfR2[Uieɭ搷v_lhвA#B1'/'X!s6Bw|nbV7 1U? 
CRF\ѹLZ#ه  g2@vrgHĂ[%rg )S)A'~[߫tcؒit Oa gisLxI]EaۉYL6g6H#/(IJ+ԧ>:ȶ|X]”Z,Mj\JoGX\#GRւ'xS1R볧l);p.Es&HdI'tYc0΄&씛1KkJ֯$ 7Lq,+1'-)&| dKCYRȾPC@S'Z +4} `ޱFHi܂*\|U}E?n-W:'yu KeMFnPR̽Rwu-\${P]}IkSKOcutD+JŰT) 0'Fm42 @:k ˕d K9l4r;x2AȮw5:Lϭ WdK1R jxkx+jx4 sRA,@`kxTã|SP9c5jsPWSrqHԮRЧA)\mmRF!O]б+˫q;Rc|9=;qGeNvbKwğp ^5*ـ"FZFCSzi۱ŋR]?%\npV~j'UWߋo om>͊g{r I"_={4=zQNr '~=&1JQ{sY-17LP֢mQ5՞Gl]} @gL墲6D0XfZ 0{̖|uh@D)MYsW?==Z,I(svmMPzSˬ:$O)̝L0_;I)ߤ+)rܜ{cA͚ ")Zc):1rD٤Z%ZP.<](6W`S?]#5 oj 5oy5:ޕ%-{?MiiA vS`|q{Ɨo[ރ\%JY01/Bۊ(بo8Ы-ǏơTTly|+Q |jkz8--!煮qpӜVSтguY%0ھ>Nxx=['ae*7fG=ݖy(Gj^浗lp`͖ҬO׸ݛgHm/\kӬyR)=xH|LJNΩ~ϫm)̬Lwe%TZn>FQ=&Y*F9 ('KΣӡ^-ٯ Q<^'E#FjH7%_b4fϩ:Me:+O#a@%^t4<gjsB%'mJye퇬EZV3T&ۗʟ.h%-x+Q}8~\IQ~HIY:3mH yIm=RGJϕ.fo8Pc^//9D fAK`;\3 w:>zCm,ae@" ZoG|FÜZ 3KTC#X,eΑKiDJ{rNrFeb3|{^u?i 4|aە"CփU/V(s.}.;@C:\QeR:Hz_W`Eb3 nEM'x'i5H!jgbk ҇؝ 9LZOF?S&ìn c={L Kr0G ]ockv%?tȎ@KHLgI5^It[oMLzנؕ–! u\^]e @15R(mM/V4 ,w5njlX]PRhZjDF*m=iJU!ΪÜZhZtu=)ޑů٬(O1?]M9C4;FS/ɶp9GDߩ)(Rkhy!*V6=dv&p4W?1 ;) fks46#H L-80_TT蛓'G7?ѻoGes{N,?(5ޑ aAcN#`ezκ@z]@ԠrσJ`!W <ֹ:`ˡc~:μZczʀz >0d^ n% ̀&zg D̶& A -&X^ECC!bN<_ホFwgtܨa&d!I!Rs\jv5;HTֵSJ]6*dDR\94ARHC5[IVH jKA=mtN[0}}\R_f*yZ09nG]铅R0,lO/9v2z6 nיNSbAyca8MevpE7ЈZ]f+BT$v-\2*; j)I- vR꠨Va# jҮR%'Ak*j갦=9Rm`fԮRdvNx+CvW, QJ*[}BSz6zú㠮kفLz]ג1TnjJxT`c`-{55|3lzx!h^cvU1\_$??_u<+E1y78Go˷ۛMSכD%jg}gF+DW9zg * T=lv) TlVmnPQdDTd-çWV TJ+4m8Q5X_t1)-,тMҒ*kt7v$N,?P\ghQgI{Ը+/^KKVŞH؊Dzk71߹U[SP n(rs==Kt&DJ,B!JJCBQGBpZ1FӾb/VHW "pf,USIT=3^k2 c 4;14k]և*d`6TճZ%LpI3f@ZX#7JJ#u Y`w s<ye$D˩V69M(D46J/HFʀm{XF$LLW"H{J$D)´Gn?wq<ɶ )4F"I!,R!,Ϫ@\gH5 wQD!:ع." >E+c* ?!5#rVVOMY]F)h-K#=Nì3c\g Ľ p%eD*MmI3>{Ali)9?JTɲja73(N-Mhԭiz8d%aw ~27)f*wpl:=(> >ǐŐ E8jF"3y"^'J\Hױt)oar:̥['iw5-@P\qrV6AFp >9W^zPfa.Gdk֘r:sT8`˞@7mr-vFoV1 bGyUz] _#2nad1PYZ͏zp/ʑƙ&8ۇ}G?8Wo>tAeJwo&ߥ_zrfhlr)y_^ڪ!hYCoGuk|ON-ҽ[X66NH.HR5Y,sR2;S6`RB^w;4JvV }j $nqL5O~)z2f7Lq^M/g*5̙_|VfaR(?޼y51!+)UkjM6//$}݌OlAqz>`g9(O? 
a?wEM0~&t +3 $WطoleTM23uEXy3멀|`7Q ^JdꖳO6kag 񴷥cdqoReћIIHD<t(}Y )#Ag9 LG3U2Xl^x!*Pv l}P:j%sR$zR =iڛВ4Ƭwx ZLm('XG.vؐOO8` YET!8kQ8[aho3zWxY LēQR_Мj3SOZEcIH$j .sގ@p^XE䝔LI䝔Lީ&O&p cLyi^Q DGDCD ʤk5` 9=ua]  }l5]x-X*Jq D[X͙ŚE٢C7^|;be Z( rgmm 0$';ʔ֔t<&g?ܖ/;$RL刺f+5"  qhMG*mĘTD8$R1[@S~ސkݪ ]Znc{ͦI:2mrAY0F;w(ٙDH٤R!eEԌ].&eӊ4%ewbACv~^MǯWIG}@E%2ZИC M]f(6\ӈ"hԓPTU:.=H ZTr"f&]ک5#8vʆ. ^@MNWwmRҥlFd6}w$ ΣNsZ}Õ|h(H*h!B)Rcµ 0HﴉSVJΝZ %Ѩf4޹3m6_lasHpD^v/(BN@i25_/<#2M(Mx>ЮȋI~L໻ډK]sm^4!}YYo oz&OL@k6bo>F ,MBrjNחqŜcAj@Bc2ɕ* ˚gU-rC XeK,)fY(a(un/z + &ZDZ(T^0>'Z3@XO99 *pڄ!aFU Y66ZP.L xTR05;Z6"XrF>sNۙ:W!rn (\: Hރ)C%#2.Lx*QldÖdJ:`RpjPQ_:0rAa$X+8y>V:bN3tPuw)_J2K&Lz7' ln~tPVWt1G+T PW )t}nZ'x 0M I+6[r(/}{=4f gQQ1Yf9qG (/BYѦ(Wizta  I:~|b䪮[mWb@{G @sEspg&z|7 n_ze[uɐwJl3WW1zYlyXiݸFIol4ѡ"2',K:\/./ lnUкv.xv)țC!nAJ_s9m`'5޻nԟ6n. Z fa))c&($*,Qٚ5ֺ -.Q $qANoک Sq uR|ȡUo0QPZo~~ݺ.1$<ZX ,&+*bW>yaT6 q7v]+#5_)OVD"|DuOB'K}tO*${gӳR+Lcx{b\kt67_.$=ِu7\mVks-…{HfpF+D]4pmѥL[E0p|d0uC5k=Rݦ5¥#38)(ёj)9pE8 cυdT %&BՄ!h??+?aց!P5;`34C\r]`^Iv\a9 D`zP&`|TgTaA-)"#Txe)$'|i2o()~^]1XƩFѹQV?{Fr _ iA ;d!բ-QZ zHIC{83$m3@9.UdA DM<(c/wbQzgEgKzu]kfYS5j ԮV&,= !P(GX{J;A`L(R 2ʨ)8ﱍָCղ]> ӤTMmR$W3ȚZ$WLi5F{R9D2z?Y5Qw L U Y(QJҬ_9&in4e$M*IPcV8YtDEK˽1`VNX5%1( ZVsgJ"<(5:*>5b_a8T-+U@uҵ6K}ݦ@9!/_D$]@;ڕ>i* Խy&Di`vn:3/Mlw77n^ep:y4Z}qf7+>q0{~~/{y O`7, j1K~=>#9߫ ouZ1lvY?!O9A~AL3AީMinrbz6d浤>vzA,Ḯt/cT ZS=q a:*@TY8YYdE =eMW0߷S|7imB=,~!qgalgfFI+c>kV|*^32$L~zd2] 5ΎaB@}oO]?ܸ0Z@ SY$j3(Z# Қ aoAQz=i;* 獕.>Ŕ N{GZE.9~+Sig|m~}ߦ>xw'c_-.2#zNn.5UpO !KE;C0n 36g>c/%#|4}_Y`t5~3>]̻!^~}'~V@|xȎfY9{,@@o>&A '}Š ޓ Y"JY˭p8p!`6EHX Tctx8I17d!8sљO`N( *f> # DG bA%U٪DmZGU vrGC42Kd//.a"(L⪖fU\#nC:1.X ^ef 01!g]'Kp3f5 ?/B&U!z?ڕRD9 )"BaB Z3ƽCDU"%I7]|pYsJZTݿ= C?9[?b+.n(];%(kRJQ--"YtH[-Nsњ1V4D{F$T0斀vf󹃚h'3Tdxk(L.gҡ,UW1H)7/bSKN`sl ~@fď C1y9ef|݉gS>[4c鲑FRM^JkV/y)pKSim\-- ifœb8~+h,b`ewvXQ !@~_à s_?@M~x>m`u1ZSy5նnҌz'ն6iHS,&zT4R-(.NX,7b{k^P!x.< hOiz^ªzZ?=/j3kB/ m.EdZ⣳<|0͔1\Qud!h2 QܲGyXvm$^A¿M[*,M^9.@pwya9o4{Nkֺ~!~3j"C =B֕ފZG:*dl#}rY #w-QguDb0koU#ԀB]Y9O 
(އ6PALh;p{5v'kEioh)M~}$?%=،9w%UK%GqmS֌qwX^x[+Mn618^Le\%h;}ጼ-Y{ccműriJu0L-%ҟE*r^I2,DCƊ8;Hq ";)SAV+z\ EFml㨮5Vڎ6L^hq>s_)H;/9pe[s^ܓB{]Zn@A恾us :^WO^> 9[IY%+iwm%]foJڝJMZM|7Or.zovH?sz]tKo_In{Ņva6k `w;W|,v3?Q.֙wiAgkҷ 'J, (`^ )A)B'r̉RAXmWdw^bw5=f5EsF 5&9MF,X 2䘵8X@+gZFocG☱nj.peRfoMQsx_~W1Gl7vm|"$ +6% "to5+jfD Z ]ޣ_>]r5c|H[} IU\aE3dv=ҲyQ^\^G9iE$t7LC},5Kv+D[jzowVvD+L{Űt>^TV7{ NIv?p%? T XO= }s[6uO7:BHU=XFy:lJs8/SZczLc# /D1;4>9.bZMi$P wq4GXEi":he2X3G{#P`Utto7y1Hֹb;nu e_N8Q7Pq_{3V 3)wۣ;07?=xzJ72#̚9 !re |^35*|^?_ ^ Wi8 M}mz1xWA=[ Vߴs0AӋ0|kQׂq#,6̚dh3=7Es`r솊˓Ov|v9tgϲ9}҂xN’_y5]Rl[oJM)q7%uŖ䰪ĭѦ\c就D^度 h IUJ|{*Y:/o~y^b=q̮c\fxÅf6A2L/ /{'@#TBaEQ}I@xY|N-|#s%/ Cĩh GÒ q'+.?[PFa~"KpKx}ܡupp4\بSD 'vN!A(+Tחa1I`]s m`d. '@M_6,RُKm%wO%pAr 1J*mC*w!J8 ,ARe1raE1H*$@F$%ĉÜJa4H,z'j*u-3H:"U\v=G L)gH{B v́Vy! Mk :hBRF3)ݲU+b&)je)2,4z,la\_j#RF.c56$Qpi=r6YXāA-%Q 1ZƆ Y0Ǘ1,Sk8xpVq 6a=xiVD6)d(e\$Z0{B(rew o R@qV!+Gͨz(ud\y, #p~!xx^¬a{#A0x%(hf,0bb|$IEۤZsSyXvDR"O-T'҄a8骮*qo%2 S0Q &̨4:ùOVdF8Ϙ!Ȍ*jL>%PvFzBibpIU@3UH |Y/38nLen [p5>c2EsIy k7o~D$o}&K"3u~8C XRђj#ӒZRiY{=(\Ɍ0TU>A'͇dښD9K:1Htb1egX \`'֕9 IB,c^PNJ(l*ia,#V9hD&a];*H)eq&cDIR Ӊ.zR9)]@q'&QҔA09LuQ:iJ= 8^E>Ȳb1r!jȗw9@w p)t/k|gi:)V(\fs%px bzw7\?Ӈ oL膔9\eDAɌk,7UU4NYa()UO@\x_z"M^dKg%Es}tYiYE8"Uk?&bubۙq;("OF,t<1Db(annT˸JJ,F"R Ss()<Ԕ4-Uh~0a1q_Yg=fǞjfmZYaFxCKi| WjR8PJ7$-43,+ *iiQpgx-.::3)h Ҧ.&L}@OjԟweX"^R8"F_cVS*y$JU01,eO^2%.2pܔEɢد2g-Is))$8_[.q49-KΦ$&RkT|1W-q:w>?4-܏YOc/}u%U?.z)+2lk f+80Mw3]SJMjt?w?wW~6_,ɛƴ;|gIۃۗ-͘>o upFQj<'+{\ ^0`(OVp~WFmgJ ХY\mL F=AT/с9 )J.2 ՊX V@Xr'$qon\!K,,8Ȃ>1iqz.1BCO'N}d,! J ØYpC}p(I$xL,c4oӢeJjz$`L$纐.0ft`,0ė ͙)VYB % bqĸ_/u#:ń:4[!R[s)J %?<oBW1%4Ï"N:T}IǹH)AYfl¨dT'~_{@БLϋq,L q?ɴ1)USP\Z8cQES6ߘd\R.Ȩvp.jb%n:L  V` ž.ՆGYSFjF8tvmSz" f2{+ FHI4Aʕ%q`yj}ѥԑ$FD9uQލV6 Q4~c{}{5뻘؞}+  3'Gh KRTO, *M.l!v\ܖJ邦Qj8q|{ȕǢ'|vCM'C+0rh\bf09uE4iE % WlEa 풷mcN51khy4nMp-TLx> R]ĶA}t=B=KIw@eV{;Rd ѶDP-Cl)&EHSY]Jɢ;T("~ ۬İf|I!i7aOmT i`L@)&zLBi]m['l:˷ni+ tpBL经9"wv,l; #T.r/Ql`%v%o8'rNj8޷R' .' 
-)ΖRզz4PTt[ bcԙ5T1_EGĬϟImZjʔb2UDK)7E—I~%VUYVjZU:_DQ]f$͘*UVSyY(ei /j"O=Pva!U!aG2N?-=j-=x{'/0,qZ C ahj!:dT1wp 5Џvw!nNKA34+jxTs>v+w?$5.m*V*ty\F}zTS~}#M+0x}>>K;ɾh6{z,8\VO/lQkf_첲_kWU.+ۖr y y}R&[w; @z/*3S-'3s-u+)\QxtфdDpLMwMsňE&1@9; x PK:K)c6 p+Ι֚C}t'?]0 . WKmEepݬ?Ln>JJn~A G$ 3;q,oz;xTdݩ6;>ڜ#$X?A~=ߝWu_ZK^N%U#}Pz ׍ByH*Ystq8j%9^@t]<}t>H-xB:@0qx~yϛ914(q%[u펊2 ;zӅvRj17J140)Zù=ߣI(['ZK]*=/l]`Y'cFgpFcĔXr뫲G1LG:O֩!RK(>Q}lJ !y" rSJƂl)!i-c,i:pqqnH?':cx;S7U {|~aAВ< 1Ysbmf9`!٤Dv#x*r{Mɥjk#Yٞ3r&鑚s 6{ExLPb4AreH4̔8  c#ɿB]nr;?wd|J`b"K I%1WC1Igzf8dC5g]Z<80bLt1hlbxՇ%z8j5N5:06cVt3J gX_F!? x^|_YD瓯9|6{EI+.F!\ Rk+TJt^Ё0vHРzOm[нhp'3zpwkVip#Ճ;-۷1|-RTFF+vp;Za">/)[׫W۷qTgu!K@5jQ7_u4˂+IWb !4S/JL)% U#*P%پ;IF~Ytrkjc)3UB0mzJuy`;josWbmuL蒈ϱP_et;eQwƱ8PYD2$A$hpݖqֈ AQS,KPȜC\L=Hi|Ҧ^owH,nntg\G[Ex0jG"z'L4=,Gݳ1S i4ʝ"ܽdgD5GJ,yjOV/8V>T8Z5:OC`WUi͑3h pad͆e יcB5ݛN_֌${϶$y dی(yO{ HF7ږ/"ze̽ `s"χ)"؝\I̯{jW`}_FL|8aMW6$ne6 T'W^1қj$9[\@Žԛ ! ;AYWzїyߣ?]F\}r7q1*BpmE0Q3J.Y!*Ojrdt?N/C /'4^0eRB]M N}}'Y13ԤI"5)Cݟ,(+4ĚUL&|*1#,ФK P;~_ _ϖk'b/ 0kz>jF$j8dNx"(T$Q Bu$Ue87S塷=$Zdq&ۇ6=ưns|9R<[04(}z@_VqOrL#~>^Xr?h=kV$C\HX_7Yjɱ[SkE ~B~}t_z~n 'rŽ~ͷ?jgOzQҌq(pܗ1GqPTb܋Im $ʄeף$%\!;7E2*`tլ ޯs8ѻE[YWՍI5K<}}w;u6s_I\W>*^nϋ=|^*`9O3*LNPǦShZݧյvgW׫0*;i'渙 f'Ȱ[B`J3ċ`OMr hhmh|?R%7N3W@eJƊRdkgZ!xbpg@8]D@# V`fɥxxNݏn[`yD# p-a(I{rhRM W&ky 4C1WO^.DZ׆HNSDFFwZh(Gŧ ʧ:Ya$\ŎZi;_/\ZV~}T9[yu Ԍ6lJ%g E$y#8:.z1<%%!1`H8.Ѩ+c EXp*V'wꔱI#r`Qʴ)ëPw&} X;֮`F3%RMKJ*w[2IZ qWoo^󚯶[Ϗj G ƫ}3%);$jeO>Ÿτ/W9߯ ssq_jVQKjd뿡cp |4.ϼAg*_'ti7 es+! 'lvBݾ8-9D!= Cn{@gOuڦcjD+:XF%M}9RQkxT\*L?A*1ӽ9yxK~_xM_3j%|y˿P=jܳ+aF5# O5 92[MjsGZU"n|3Yw${cso!?n|{=Y~zp*nFGPTkRk={p2uSH\3rƕ }7` !OL 0AMaaǔ,\^{w\*4 С ! 
u7)њ}j6IO1q!0z(rTZ|*@UTQ3Hhc Fiw-$-wdU\ v4(#-xP.<X"LB [(صB5ٮjW}ilJNHGsBrť3ǡ;D2IAd?r)rCLT -twy6cw9Oؑ|y&^KwrP֎?UA!g_.3!Apٵm?P}@6د޼٧`ouR%bvq>_R6mmhF5_^%GBo;t9^~%^Nja*i -ooNA8 ~x],f?HBm@80׷Z -ztw]ᇔ.0^SN\-t# ~YNeB~|6IF4*ƦRayZ}SMn ePpE1ay*MEȧ])[n-b6EjEUc.i@"u۪j#EI48=Gba֮]sCVb%'խqryU%6!R.& vtMķn?՟^]ߌ/.+Ǿޞ!T[(ޭ·avZi|0uvFx9ݵrɾ[s=~DNNnܾZ* Қ+A < tM3aj@]c(]zR?rx8>ȿUomN/iuO't ޸k`"dA0f:O 9/|vwȋ!BU iȄ AE"Tka^Źa|HJ1e]۲sWY׍P 5T-O],4xǘQ_FwjF zۻSxz%`zp*nFW (ʔjdWXO0'h@?U' ŁP~  {Y\w~N)_UC P*"wZ: 9&, VpObVw#HA{GO[^ChG<>snvrZL)`SֹVHhX%qi:2:z)r4HӒґ DQ8y@WH=Fd Rk9ۑ9`B*{aq>k4 >8匱.ԑ*ʁCM)*"otjkAPŅͯ5B$A/zEm{B(pLq)A|w4As0{iZO}Mϒ;y,ZSB޸nvHrf!<̓9,MkW,4{fY@In\C uJDF֊Tnŀj.8䙳:u5Ȇ k/U ?lP5ԝU22? J M'f1ݲHF5/( iuL%@h2vHo{*g-́lf-aSK B {;ƌy69h!|||U!A+q2zPz7;&ՃSq6ͧ(0D ۍC;h=~ Lqhy_9 'G7~hv5Pr뵁~Kߣ4gb\swpgm\@/Ga'5>`"MRx剤^j*dQOf?f4cu!h+*ikAF9|pKuMSSGHCQ! "Y%p9h(f7zmf0% PI0"ipɬ4Hr HTjtUTzd LQ#k@aMjB^lhy_J6š͊=]͚k&/yd-~y5pgx5Mh)W[ az33,o#qa˧FBpf -2' r,d<20+]Xz]&dv^}w&X ]Ԫwf>7D 4z6O棳y2jNdN1PL&~ZmjBTڤFr Hۧ4DP "uNNA'K565)t@!A`$S`!$K(qxY,gJ{6_!K(} #/o:]M5(lJ\Rn,K]TUZ@Pmkr#n<1=!Pepݺ}(;iT5%?~.癒AeSi~ֻ2ezR0 TG"0_A#keM H*RQ%0{x?qۊ+{!)СAQ0;[_M-ҹsx~WT5:Qo*?a0S97 ŋ GUc)b(VlCHp,EP4'H`!֩ ɧB +K$5)7oN?193V<LARXoql9heWʮ]y+ZY1ݥ*FyF7hB2&%7 3WURJ%Rn moBS4O&OMŝ-Ejuo Ͳ/߂)r.:M[zH79DkWi9vsh b0W+#weV&96 dCqW~%qE AY*Nu\_B# $%xE^\yvܻ[ܐ80owUď]i0-}oRY?=޺}!A a[љdcB??Q{6P*ag~ItHEgVҲ K|ڹqpvZ0;iIYb+ѽ:X6xkmN<"p;ArL QD _FV#ZێڨU|M8܍x=KuWEIrnW"_%\c<ssr{/I=M~zTZeyb|׫rz~ 1փ+zwl>x#iSS6S|e>jϧB jߚ?V e0?JQyw=׸'a_sB~.>ܰ6w.K?ϜRI󻉛y||ϝ;W~feܕ m-7DVZ\| 8s aMŸ=$C*X ‚Wk<*KbMUnaS,nnHD {nP&M=q3%Q¦B]!n!fM.A@0I`xC5ݏ7w\%O}*oL|H#޹|^_LSL>9/Fx9X<y!e${.&7IDR2|Kꆮo$,gs%u|/7DSBDIcq_fݤbӸRϵ4fE?un'5*:m,ݼZ /0hEWFDnmMYd_f WXXЪz;k)@l]0Znc:46ۦvqw@ g3JD]RJF~Sj..^ki͓ƭ ;jIINlԦ/m31;$uVb2Ԓu>Y$;X"@t+QH\@e=W% RK u"`4#ՉQn)~m1U_D{ h/7,Ymnh.6hË18- Ũ]pTup8pcbh1.cC8uH,X$60NRw]"c>Lt(5眂8kwĵ8 Us0LiPwiPwftBZ Q*4J"a$)ʼj1DZ!,85*uT;?*@,"A,}0f#_ %!dRh+Axx[ĖqiM $76]n^ޠ6MܸQh-^=o3Yo2_)Е-6`)?; F_&@Iu18XNJ&[Qyu++L8,ZVS 55?IxctnHpj >T8D\_6Ibq1Yܛc4ndt.Pmia%>Epzvցr:~׽?X;-Mó|YʹrOs Բ{zUHIXn}zpkS?8NpØ':K# 
y&ܦDĜRCݥɎ*U_iUu0i W t#: נ(m$tzw `!a/@ G.p^!FƯr^NJhif#ɕ"8Ǘ!%"Z/BՒL ("홖R̈T -҄^tłB@+6A \9Ӛb|0EQhۙZ9O@ёQI,, +UWu ̢I-oT.J2MEz1VB6%T9!1%(Nv!@q{V&9BE#z8`.΍ejLC PN g@p‡( ߅p 8EWzW- BDHL0 <B5"+9E1U-΋Bt#b}(o@r^X8%ΐ-F;%U!ohQduM)1poJApq8JH) nÑBF/Fr4=HAm+FYF"; NJdX:1dxa`*ק(S קrcP44UUS%"P% HP K=}-0$Gi=| Dc1 CaF@}u`ʎsNL=918crS7 <Ӽr Oze>-t%!/\&&˕-Ko TƖB?mM**)zO [%.{<$zΖɟ~t#C=^Hݏ~y˫_0]j%Hře2EtB@Ł0$WVf$jlO>yi>L_y4IMEtuoyβ/ol #VM8RgBdtd$B`A˩– +i6 .WpIҹ!uSʾ:aPEWݚeE)bhߎ62|,B2r0Zf(z&H`#LKDs&vsv?i Dk1q+!}8BB^ g#msXIH߲Ln/>U^C_HjES^c_/%mz#ReJay79EnGÙ"̼=8Yqs7!WNn_ i: `3N/d8\]a&X ^zӛkj9I<6?A6dt>_Ǡ+8b^z阵˯BT'UVРF-bxڛ`wk5B8Q E|]w Dkv9{2`Z[6%`-KgB}ߢ\ʥ\~a`8X<h7!d &R4c-A-`eyW\ؓ;Q²kRuJrH*G'Ik0L ϕ&{⨤Q0BǦ2bH a|fF aψm\gF40 eonNс TJ8=Ahd:F!a-ɌWSCte2gR7'ӡIߖo=΄;Tpky5BZ=q$s̓1 >p`[Bʱlwp;OZ!*. #k4JPIGdzYBVվa%(Ձc>U]2;D5ϛM݁7L{Z9/btF#qj㢩ũS_3ﻙO .3jG3jϖ??C /0¡e{Ou\QOBآL]%PyfzY,YIya@e5tt*T`5 fWqj]_~WB5n҆)'E~aGt1p_WkIt!uM/aBCNV1UM~2QJ4g+FzW>n QjO 9;4[3PWfAɾSWe;9_/lnyމk~9vWj_=P?l㼸/I@<7[E(vÙ9;ho*Z B;olѕҗͰ|,-Bd/L8mZ8í"|oȼ~gwk-cĩguz=h!Nڦy+zDPz:) HISIDM$2ڝb;x}x,\T E]{˂\|u'3ω&@Ÿ}f#ȑ#EOih ?1/lT<x,5G $ ߯o?zWLSwہq^g|[:I xҼ7|ėJjw]so錇޶>4˯m3/]μNҽAoj>GyZ#۲pVIF=5PII Ƒ)M)e-*qc JmrO1r҇׌"MJuug-L/d?y5O:eLm9:T+pߥ`ԖT af?F* |;ӻ,><(_G]?(_-_pkށ%L2XoKT3`numSe|}j2qc&Z 2q-(cĊlʌH1H9X`XKFKk,c @c!X2•DN^ F1hkSy)uL{jif4_wG#J#JE%dZE"5a_ſELga 4p~I^?2x_гڛͦǓ߯h! }D/\Xո]Elt&4FA~|ȯM8 "Oȯyl*@E @ :🹇$3 \eb0U3Bf"FcI'nUF|:MLuX̤ "<2gT;'0`x4? 
vFf,֒fg9N ӓ[ڊ>z |N%HUhx>b=XG'zJjҤ-=s Z` XEo=+6Tj8kV-Ӗ\lќ5Dg~&L`fchGЀT:f|LxC),L_wa`:_Fֺ$ XEހ:uzӺ87 {r$xN9%P5B{uZƆMGpZ]駬9yl"설o\A!]~RJWeۦU'զk.$lꤋf&ď!z[p޾Xb[)P|7 K.Nr/.!߿7ӛ+ာ]˪ %yC%jݼ]*LɉrzV8=8WW j=/f~P _{?MkNj-ߦݹgq`n]9AB* Œ01 f^KGD:o#[_Un]tfVԃN|(\:.@(Y!jfsM4\ZX(@3Ճ Ы$7s;ʳ櫶?"tK/z$'Q ]wS-Yp;zQ)DםcGйљ*=(L 3N]@dY3Ўe3~>k!pVv5xQ k+3*;S3@aLKf9uf&P2%C!gCQ }Y R)WQPR5hpI7-ҵ`纲\-|r8|Ώ-nӞ+܆PuAfK]|[|% v w??vǓq聧Rh3)pB0ާ́nkkF޶`e7#!/\DdjײQ?j7])yDAщcv3ʎoMh]քp{N'y$%[,uD'vmSݢ ݚ.12Ū'zvPsDAщcvJ=}khBj&$䅋hLU0?sX Nh[jr0Mh]քp)ɿ y݄8 4Zc[nG|$:[hBkn H %b5ea"|\$qz&Ht FWޘ#[fT"U9l&1zsh^S96GWʣyޥЙO֍_`xb&6~)4F[xuj0c+glx"B 1 ]>/1#B,d5mXFZ3%*MeUKG0UMڔ!+o5BhpG1TCS*-V}r -nu]Jҝ0GqϧzX 0 u6pGInMn!} (hkҁ|2TYII{7;7ʯS@k>&L`\ [,ոf\2{OY\#d.__|r9w~M=)w^reHhRʻDɧ AS)hN{#aGݏL .dai5 lO/^Jm} PjefS1b Bu"՚/F#mc+J#@P\YbۦHn(>d}ϐEɤ4Ce7RSu<&\jMm_`k~/$q$N"΄qB Ɓ2E!b.QE{ *-'T|e(Y6r$-Hdv{G,s|H5{'!(RJ]?*(G(G&,&k[uQXfE^݀W4?ףMS@WMŨލm3zX󋈌.2ࠎ,GY3G9+ؔ ,\~ۯ ~ Oc1KXq MF8"- * J<IlJS*e ctöfĜ{p˝-Pj]*+揦j>%n <ɏP=&@)?+^=ŽGj p`J*q_`; 8A;6I޸S*Zm/fm+Oi](Q=5P4p2> Є vQ x`cgO :3m~KDԜL@~m]6Q Dfe}XftABRw/H>ެA0qt= أ"Όn;bqVm7@qCͳE|,SeFAʂ#7##6O`,38r\}Nj!XN>iE !A΢ 칐_9sbQjKXRu^LVu~y;P BXiӣ9%Wi1z7R?NçOu`nz?-rY}s(ڮ 畾"y˱ Ȼ/T(+,}&.]'eVߌC [47gi7EoDvSa`؊T`SA_DoJZ{cwjb;Hk;79|Q]4[ o5-`*BԂzz'&X=M Ҙ2ԳB6/xRTTJvB(E <qAsBE/ ; BL+ZE6.*j>lkpۡg[!;Rg[q1)4~eB޼queZ8ʦ?R!*WYv;OlmWa\jҿ[t_,H$ٯPx=kNwkXEX.ÕNBXۍ m14Ŧx{t 批?_q997Yo y?.ApeNi=wX-|iI9mSa>LA\0Q.{Ϟ}&Wc CF[pG nK N@(V-Mr iQ.xϞZ'MWG1I ) pAX,?u)C4fiw~ks qk;SmGICA)xa9.>8C }/Nk(ٺmJ~k!_uu)4#Ud/K].&4lr,I>" VwDx*s|Vm%4B=$gǎ/_7'TS'F p\+{s?{ K^:/u`T8fccS Ly?F$>3FGb}2sıkt/M]SR+j ᳪ,g\P59 1E먥Fʁî:lt0N[N1Yb!x>77UR >R#1f+FWK=+C:>$#RGe,PQg;OW5YPKNsA6bR\F䚄H4#}g=U2ڳ>W.IKqØHC7a-hV\XGϹd.H(cowl Kҁ!HgG_Wb^-V8SBe=b 2VnJ`C^[S~[-EZQ1,EfB#3YpQ TiY=}{ s.+-PhI i:[d`QO̲҃ŲsϸjsWhc+^0{[)TEKPʊ?p-]F]pZd-GXj%74(<|aI8("5~kkI5}Xu.&' nҼ7U+dǻxS0gqXخ@,*\_AI6>T}P_Oy!$UPhE=8jl=KI ňMD#u`S0L*9dIa;]Ԭkִej<]>hܾT 4C'|?iDž=#p? E`!H+sF׾vN[d] ִD򰥘/l Oqj*NCЫRI>fAzd\!wGHr᐀OJ+MֱﴨE6+rӅڬ ?n;UywSfTΡש?i([ќq'ΘfL/M^88 狅}V4c$H!۱o5~YZXs=:pvMPvMɒ>8#wB&*ϧQ=>~v`*"] 5h |w%bL. 
{ݰ&|+PdnC2-0A ̶B)kk}H-IB2O=:L+^ Њ_ U @~M?Ka~=.%W}zA"8lnoh#α;j#44i)pR` ! ˷BiKiR.̱4t˝aodfߪj_ӔH J*TeFDK>a";PJr܉S&P|\yK" 8R^QalfI7ZDa=d'>@E2WamӪG;|l )PVb.h"^{؉ qۭͽ:m>j]J987PԒAQH[b0(ނ}iwOVRVE6NN0jڎXTLOisFw7<ۮCaVE=N'v5^f mÛ/ =.2\Y@Q\/sq:ys DFP8u L=e&ADEo|n.\Å4Z'T2ъ1Q$ۚL$GK) ( )V}_?Jq!F2qtjΏ}]ܹNC|g./5ZS&j pXM:ѻB|e] WjZ1SH0)XS}gΧ,Ri.'^ _ @Wˡ4֌_M{mW^;1u5q-k:Lk  mBVoNCWxh\+DSm/%v~]ܪYΞ%נ_ { 0~خvEJ>~{w')QhF caMrgE »:g X lX} <$+f "؇q*n,FH.0pENHLf'~n*j_BP?);@jv0[.6R~XGh6Eu~WpZjJK ) #^4*_vSUG_En ޸F.7{D攻܂J-7 /T>qnl%ZU&@GK*olL~K *'jӶ*[k=Au3 .TXgiv_n޿UJtF,%L+{7 ט*" e)wr}/BQ},e1"z7x1*N^ޮnY~qxnċfQK#> Aӿgw~bg;|(4/iP_]7`β/od1#4!+OsTV6~HkbZ*rv>ׅԷe5z&?0xt} كQ[B줗\κc(; >W&MJֶ%8*PHk+L̔Aq2[ ҫ8JiJ1Z78I!P\Y꼰D qBZئ[\D_@/ ˍ09Xg_ y˥ў`5S_WU>xVI‚ Fa__Xz`Ig;A/fѧceF%&rA(Tc`m*NO8'''p(+1TXj.+$AT Rf",/I5ͨٔ<@,>`Zm63#0`]ɖ0˕S>֥" Z2;O]E{ @~4 !DZttuy@?RR{oGS;𞂛8 v:a'Ct]c!x4'/&sɦ~)sݟ A6[,,&ch Z+S8Vbc˳~X&@4PoƳ>Y<P)R8%\,RXd0*Y0 X9r{kqO9CcY YEL}dG]XOQPղJuC!A5̒^Vf–ğkk&+gijP)쀞$ o,Qj$6QHSoGE96Vk-\ZY֬\-"Ɗ>wc;Qrٝؕ< ;z>^QrG\ f#3¬V .|LUdSrSF#Ѧ ~zINdzˆWM+ ]4 J. 
nrʔ wAjW'BG KiYWìaWg3{x9Kת$iy}hGq26V,ذIY I5 rW'0ja6{97=|7/(hWRIgi#bM.EwҮPs2gPIhu^4t6jn:VJWGTӷ[/ROT4rx^jWaz͡6uKCSczT4  ZHQa9GG&sJped0@ -.kM(ᶤGB凊PTEp;=oi :p>#+/ PJhpK9Kn}YYrg(f]J= 7Z/sj/ҢxH7xp^03fr`bxu]Ri_8U'&K;n!jQzum{yͺ)hQ~ 5`Q4o^1"C6'vohx`gF:lӭy߂sNXSV+ҺFCf;8>Ҝ+YemǷǫ#jlB$NY}wBW>֑ūOS7QK_PHO鍹K@S~ZfߙA==>J!hCPOwܙQܝr4R-6Õ#"ڳCuBQDF=)fIhSg-22x E|xkҝLԌĺQ:8ExkaHГlMe3 PN~Bw0;WT<|hp766J?VlZZuʷ 6J]uG(nZG$$Q4J/`cU.iT j4JVG(qjBNnGX4cZ(h=F-F%Y:FՊWmIJ Q~y9!F<`š/d Ӛ!;e }$&jV{X(i✓"u9efF-YsY}v> ݚ;'TԃE8g7#)nZh5Ʊm8+3CbL뺗wDP8&` aI<E#Myjgd xH10ᕾui`/7~*_``wB0!$S~wu;W.x4t@2p[x<6 9h ꍧ$a &%Ji% Gj 3Jb _|ԝ>|׽0RS*`sMIeRI("a$2;Q_hr{58|NL3&75E}N4>ˆ= #BiyLG?|B#HEG]e) #XЅ<50=t:>CWK@MyOC|x6x`7}o h8g A픻((lg]w.e\κ y*вE8M]X7m֘kPqS69̰DQRDbSX* qcg j4 )٪J0yiMGj)tNqJ 9LCjrdU >GUhV\5 Yq  j":Qu'XXd t8 1>Ǟ Sp) 6sUG!0Oӱ :S(Xӑ{̗dϸ&H?>?)>1y :5#x#(@7('ә<`׮;[g"?wf2{oЛܗg8#\ N0 \=\13D_Tg:nhS82+h9~bJU槣2RXy`8;5^&ɲk6~n}EH{I7dGMo(7*.6"<$ME3H!N]o2?eVkK(;ëKFTbs MRPX%6!Q4,|Ɯ{P$!>ndp+ qЛ>H*lTdlB@=K}LQjcF5Jdlbp'a>Y\ 'tFR*RPaEO Qij!]<\+,}'\Ny@f4Mbk)<\ MK BuuBQfl3[;-lfGi <j sq:TiWq: F`b! Ab :y,EFgN7nt = W巪Kḥ8}?`zre-p$Q$gdR^)ѯL?*Pe 'Ŋ$ >I&)#sKLԺ-)0$] ޙX+ ^Na$^pbiUdE#" g4p:U l8~:X)V+dw^?%OFV'Hl pd<"(!wBTiF qWyӽOqՖ1nR9enTrJI0'ƹYlw3:cmhsW4%FIY2ΘG^_ p|:'y^=x s]csl"{!7{Cr)ՐvxoPl[خ?r]cslVyK,IU["Kve,D, O fʑ. `˛Ԑ}5Wy[q\˜^L"P:u_='նzxtC /,qޙkT<ԫ ҤCvMnfh7Gv1dif?.:gye{SS/4nc39{{~fp#ِLc`48j7*˞sw$j/44FFnu3z&yxQ'j+힕yfk6V)Fzt+}n•MCt҆lDWO;F4\DW8h2UQ;ɤj壕wkVD+"ӖQoYz5;`b3fl (R~PuQ˘zɽQd!"GC;6(FWXLbBd?ϮѬ%n|J]_"Wy!,p]^[TU oXZƷ?y *?/&_p2*˟߽  瓳eȐ ;ޏo-#4:j>pمhwEΨ:]Nk:Z!ߴPoWd}BE!@'!60ڵzm"tw 5}H;< thvUQJ'.u\$H1Pג =Z"Ko(h>qȈ"x^BئI t.w/Q&9\Ljr[PDȲ ڣݡT͡R[f9S!Z$ZKhyGW%ц\V$ HOiplItrNUd|`6u_L'8@Le.Ugg|xB E ګzB=aY7m_ϼ,bxWs#7^[MT0WXmrϿ!`\dW{_YTtřwՖ1nryaeއ`zΣ @{˂fESƤхJrzA? ( Dƭ"T/DiLv]}ƵSӻf29h0G; A.]m(v\{@0S-p k"(-shKDf}60"pDbpc-%2ǣ8.^Í5\d‚AWArP"j5 y2`46 )E\\zFLm3ޮnd3;P,A#zg aP:lk缡 e}7A<oX_%> ۈSv܏,gQnw?@G޻P S ?Lh4Bp<8gFK #80aU87OÇ0 %s/i5 [ƐaR!X{&QΊ. 
var/home/core/zuul-output/logs/kubelet.log
Jan 21 09:17:50 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 21 09:17:50 crc kubenswrapper[5113]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 21 09:17:50 crc kubenswrapper[5113]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Jan 21 09:17:50 crc kubenswrapper[5113]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 21 09:17:50 crc kubenswrapper[5113]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 21 09:17:50 crc kubenswrapper[5113]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 21 09:17:50 crc kubenswrapper[5113]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
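[Editor's note] The deprecation warnings above all point at the kubelet's --config drop-in file. A minimal sketch of the equivalent KubeletConfiguration stanza follows; the field names come from the kubernetes.io page the log links to, but every value here is illustrative (the CRI endpoint is taken from the FLAG dump later in this log; the plugin dir, taint, and reservations are placeholders, not this cluster's actual settings):

```yaml
# Hypothetical sketch of config-file equivalents for the deprecated flags above.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: "/var/run/crio/crio.sock"  # replaces --container-runtime-endpoint
volumePluginDir: "/etc/kubernetes/kubelet-plugins/volume/exec"  # replaces --volume-plugin-dir (illustrative path)
registerWithTaints:          # replaces --register-with-taints (illustrative taint)
  - key: "node-role.kubernetes.io/master"
    effect: "NoSchedule"
systemReserved:              # replaces --system-reserved (illustrative values)
  cpu: "500m"
  memory: "1Gi"
```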
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.623006 5113 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625581 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625596 5113 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625600 5113 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625604 5113 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625607 5113 feature_gate.go:328] unrecognized feature gate: PinnedImages
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625611 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625614 5113 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625618 5113 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625621 5113 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625624 5113 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625627 5113 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625630 5113 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625634 5113 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625637 5113 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625640 5113 feature_gate.go:328] unrecognized feature gate: Example
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625645 5113 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625648 5113 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625653 5113 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625656 5113 feature_gate.go:328] unrecognized feature gate: Example2
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625665 5113 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625671 5113 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625675 5113 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625679 5113 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625683 5113 feature_gate.go:328] unrecognized feature gate: NewOLM
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625687 5113 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625691 5113 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625695 5113 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625699 5113 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625703 5113 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625707 5113 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625711 5113 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625714 5113 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625718 5113 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625722 5113 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625726 5113 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625745 5113 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625749 5113 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625753 5113 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625757 5113 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625769 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625773 5113 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625777 5113 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625781 5113 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625785 5113 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625789 5113 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625793 5113 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625798 5113 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625801 5113 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625806 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625809 5113 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625813 5113 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625816 5113 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625819 5113 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625822 5113 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625825 5113 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625828 5113 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625831 5113 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625835 5113 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625838 5113 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625842 5113 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625846 5113 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625850 5113 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625853 5113 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625857 5113 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625864 5113 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625869 5113 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625874 5113 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625881 5113 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625887 5113 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625892 5113 feature_gate.go:328] unrecognized feature gate: SignatureStores
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625897 5113 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625904 5113 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625908 5113 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625912 5113 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625917 5113 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625921 5113 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625925 5113 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625929 5113 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625933 5113 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625936 5113 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625939 5113 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625943 5113 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625946 5113 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625949 5113 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625952 5113 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.625956 5113 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626425 5113 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626434 5113 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626438 5113 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626442 5113 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626446 5113 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626450 5113 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626454 5113 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626459 5113 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626463 5113 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626467 5113 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626471 5113 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626475 5113 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626480 5113 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626485 5113 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626489 5113 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626493 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626499 5113 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626508 5113 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626513 5113 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626517 5113 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626521 5113 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626525 5113 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626529 5113 feature_gate.go:328] unrecognized feature gate: Example
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626533 5113 feature_gate.go:328] unrecognized feature gate: NewOLM
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626537 5113 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626541 5113 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626545 5113 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626550 5113 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626554 5113 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626558 5113 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626562 5113 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626566 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626570 5113 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626574 5113 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626578 5113 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626582 5113 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626586 5113 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626590 5113 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626594 5113 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626598 5113 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626603 5113 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626608 5113 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626612 5113 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626616 5113 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626620 5113 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626626 5113 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626631 5113 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626636 5113 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626640 5113 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626646 5113 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626651 5113 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626655 5113 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626659 5113 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626663 5113 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626667 5113 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626671 5113 feature_gate.go:328] unrecognized feature gate: Example2
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626675 5113 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626679 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626683 5113 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626687 5113 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626692 5113 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626696 5113 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626700 5113 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626704 5113 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626708 5113 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626712 5113 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626716 5113 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626720 5113 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626724 5113 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626744 5113 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626749 5113 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626753 5113 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626758 5113 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626762 5113 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626765 5113 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626769 5113 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626774 5113 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626779 5113 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626783 5113 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626787 5113 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626791 5113 feature_gate.go:328] unrecognized feature gate: SignatureStores
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626797 5113 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626802 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626806 5113 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626810 5113 feature_gate.go:328] unrecognized feature gate: PinnedImages
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.626814 5113 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627198 5113 flags.go:64] FLAG: --address="0.0.0.0"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627213 5113 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627223 5113 flags.go:64] FLAG: --anonymous-auth="true"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627229 5113 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627235 5113 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627239 5113 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627246 5113 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627258 5113 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627263 5113 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627268 5113 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627273 5113 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627278 5113 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627283 5113 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627288 5113 flags.go:64] FLAG: --cgroup-root=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627292 5113 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627297 5113 flags.go:64] FLAG: --client-ca-file=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627301 5113 flags.go:64] FLAG: --cloud-config=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627306 5113 flags.go:64] FLAG: --cloud-provider=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627310 5113 flags.go:64] FLAG: --cluster-dns="[]"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627315 5113 flags.go:64] FLAG: --cluster-domain=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627320 5113 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627325 5113 flags.go:64] FLAG: --config-dir=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627329 5113 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627334 5113 flags.go:64] FLAG: --container-log-max-files="5"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627340 5113 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627354 5113 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627360 5113 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627368 5113 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627374 5113 flags.go:64] FLAG: --contention-profiling="false"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627379 5113 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627384 5113 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627389 5113 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627393 5113 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627399 5113 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627404 5113 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627409 5113 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627413
5113 flags.go:64] FLAG: --enable-load-reader="false" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627418 5113 flags.go:64] FLAG: --enable-server="true" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627422 5113 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627429 5113 flags.go:64] FLAG: --event-burst="100" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627435 5113 flags.go:64] FLAG: --event-qps="50" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627440 5113 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627444 5113 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627449 5113 flags.go:64] FLAG: --eviction-hard="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627455 5113 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627459 5113 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627464 5113 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627468 5113 flags.go:64] FLAG: --eviction-soft="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627473 5113 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627477 5113 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627482 5113 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627487 5113 flags.go:64] FLAG: --experimental-mounter-path="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627491 5113 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 
09:17:50.627495 5113 flags.go:64] FLAG: --fail-swap-on="true" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627500 5113 flags.go:64] FLAG: --feature-gates="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627506 5113 flags.go:64] FLAG: --file-check-frequency="20s" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627510 5113 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627515 5113 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627520 5113 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627527 5113 flags.go:64] FLAG: --healthz-port="10248" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627532 5113 flags.go:64] FLAG: --help="false" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627536 5113 flags.go:64] FLAG: --hostname-override="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627541 5113 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627545 5113 flags.go:64] FLAG: --http-check-frequency="20s" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627550 5113 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627554 5113 flags.go:64] FLAG: --image-credential-provider-config="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627559 5113 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627563 5113 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627567 5113 flags.go:64] FLAG: --image-service-endpoint="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627571 5113 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627576 
5113 flags.go:64] FLAG: --kube-api-burst="100" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627580 5113 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627585 5113 flags.go:64] FLAG: --kube-api-qps="50" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627589 5113 flags.go:64] FLAG: --kube-reserved="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627593 5113 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627597 5113 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627601 5113 flags.go:64] FLAG: --kubelet-cgroups="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627604 5113 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627607 5113 flags.go:64] FLAG: --lock-file="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627611 5113 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627615 5113 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627618 5113 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627624 5113 flags.go:64] FLAG: --log-json-split-stream="false" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627627 5113 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627630 5113 flags.go:64] FLAG: --log-text-split-stream="false" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627634 5113 flags.go:64] FLAG: --logging-format="text" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627638 5113 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 21 09:17:50 crc kubenswrapper[5113]: 
I0121 09:17:50.627642 5113 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627646 5113 flags.go:64] FLAG: --manifest-url="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627649 5113 flags.go:64] FLAG: --manifest-url-header="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627655 5113 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627662 5113 flags.go:64] FLAG: --max-open-files="1000000" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627667 5113 flags.go:64] FLAG: --max-pods="110" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627670 5113 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627674 5113 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627677 5113 flags.go:64] FLAG: --memory-manager-policy="None" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627681 5113 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627685 5113 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627688 5113 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627692 5113 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627701 5113 flags.go:64] FLAG: --node-status-max-images="50" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627705 5113 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627709 5113 flags.go:64] FLAG: --oom-score-adj="-999" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627712 5113 
flags.go:64] FLAG: --pod-cidr="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627716 5113 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627722 5113 flags.go:64] FLAG: --pod-manifest-path="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627726 5113 flags.go:64] FLAG: --pod-max-pids="-1" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627744 5113 flags.go:64] FLAG: --pods-per-core="0" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627748 5113 flags.go:64] FLAG: --port="10250" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627752 5113 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627755 5113 flags.go:64] FLAG: --provider-id="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627759 5113 flags.go:64] FLAG: --qos-reserved="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627763 5113 flags.go:64] FLAG: --read-only-port="10255" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627767 5113 flags.go:64] FLAG: --register-node="true" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627770 5113 flags.go:64] FLAG: --register-schedulable="true" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627773 5113 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627780 5113 flags.go:64] FLAG: --registry-burst="10" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627784 5113 flags.go:64] FLAG: --registry-qps="5" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627789 5113 flags.go:64] FLAG: --reserved-cpus="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627793 5113 flags.go:64] FLAG: --reserved-memory="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 
09:17:50.627797 5113 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627801 5113 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627804 5113 flags.go:64] FLAG: --rotate-certificates="false" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627808 5113 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627813 5113 flags.go:64] FLAG: --runonce="false" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627816 5113 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627820 5113 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627824 5113 flags.go:64] FLAG: --seccomp-default="false" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627827 5113 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627831 5113 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627834 5113 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627838 5113 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627842 5113 flags.go:64] FLAG: --storage-driver-password="root" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627845 5113 flags.go:64] FLAG: --storage-driver-secure="false" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627849 5113 flags.go:64] FLAG: --storage-driver-table="stats" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627852 5113 flags.go:64] FLAG: --storage-driver-user="root" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627856 5113 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" 
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627860 5113 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627863 5113 flags.go:64] FLAG: --system-cgroups=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627867 5113 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627873 5113 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627877 5113 flags.go:64] FLAG: --tls-cert-file=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627880 5113 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627884 5113 flags.go:64] FLAG: --tls-min-version=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627888 5113 flags.go:64] FLAG: --tls-private-key-file=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627891 5113 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627896 5113 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627899 5113 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627903 5113 flags.go:64] FLAG: --v="2"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627908 5113 flags.go:64] FLAG: --version="false"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627917 5113 flags.go:64] FLAG: --vmodule=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627922 5113 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.627926 5113 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628009 5113 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628013 5113 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628016 5113 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628022 5113 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628025 5113 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628028 5113 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628032 5113 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628035 5113 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628038 5113 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628042 5113 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628045 5113 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628048 5113 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628052 5113 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628055 5113 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628060 5113 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628064 5113 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628068 5113 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628071 5113 feature_gate.go:328] unrecognized feature gate: SignatureStores
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628075 5113 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628078 5113 feature_gate.go:328] unrecognized feature gate: NewOLM
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628082 5113 feature_gate.go:328] unrecognized feature gate: Example
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628085 5113 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628089 5113 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628092 5113 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628096 5113 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628099 5113 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628103 5113 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628106 5113 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628109 5113 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628114 5113 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628118 5113 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628121 5113 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628124 5113 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628127 5113 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628130 5113 feature_gate.go:328] unrecognized feature gate: PinnedImages
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628135 5113 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628138 5113 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628141 5113 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628144 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628148 5113 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628151 5113 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628154 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628157 5113 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628160 5113 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628163 5113 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628167 5113 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628170 5113 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628174 5113 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628178 5113 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628181 5113 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628184 5113 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628188 5113 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628199 5113 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628202 5113 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628206 5113 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628209 5113 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628212 5113 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628215 5113 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628219 5113 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628222 5113 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628225 5113 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628230 5113 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628233 5113 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628237 5113 feature_gate.go:328] unrecognized feature gate: Example2
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628240 5113 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628243 5113 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628246 5113 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628251 5113 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628255 5113 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628258 5113 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628261 5113 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628264 5113 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628267 5113 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628271 5113 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628274 5113 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628277 5113 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628280 5113 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628283 5113 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628287 5113 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628291 5113 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628295 5113 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628299 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628303 5113 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628307 5113 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628311 5113 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.628315 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.628321 5113 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.648986 5113 server.go:530] "Kubelet version" kubeletVersion="v1.33.5"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.649042 5113 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649162 5113 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649184 5113 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649198 5113 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649208 5113 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649217 5113 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649225 5113 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649232 5113 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649239 5113 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649246 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649254 5113 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649262 5113 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649270 5113 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649278 5113 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649286 5113 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649294 5113 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649302 5113 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649309 5113 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649316 5113 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649323 5113 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649331 5113 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649338 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649346 5113 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649353 5113 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649362 5113 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649368 5113 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649376 5113 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649384 5113 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649391 5113 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649398 5113 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649405 5113 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649414 5113 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649422 5113 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649429 5113 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649436 5113 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649443 5113 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649450 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649458 5113 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649465 5113 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649472 5113 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649479 5113 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649486 5113 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649493 5113 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649501 5113 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649508 5113 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649515 5113 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649525 5113 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649533 5113 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649541 5113 feature_gate.go:328] unrecognized feature gate: SignatureStores
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649548 5113 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649556 5113 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649563 5113 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649571 5113 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649578 5113 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649586 5113 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649593 5113 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649601 5113 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649609 5113 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649616 5113 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649624 5113 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Jan 21 09:17:50 crc
kubenswrapper[5113]: W0121 09:17:50.649632 5113 feature_gate.go:328] unrecognized feature gate: Example2 Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649640 5113 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649647 5113 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649655 5113 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649665 5113 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649673 5113 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649681 5113 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649688 5113 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649695 5113 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649703 5113 feature_gate.go:328] unrecognized feature gate: Example Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649711 5113 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649717 5113 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649724 5113 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649757 5113 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 21 09:17:50 crc 
kubenswrapper[5113]: W0121 09:17:50.649765 5113 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649773 5113 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649780 5113 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649816 5113 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649823 5113 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649832 5113 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649839 5113 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649846 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649852 5113 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649859 5113 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649867 5113 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649876 5113 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.649885 5113 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.649902 5113 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true 
MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650130 5113 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650143 5113 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650151 5113 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650159 5113 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650166 5113 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650173 5113 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650181 5113 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650188 5113 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650195 5113 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650204 5113 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650211 5113 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 09:17:50 crc kubenswrapper[5113]: 
W0121 09:17:50.650220 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650228 5113 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650235 5113 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650242 5113 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650249 5113 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650257 5113 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650265 5113 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650272 5113 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650279 5113 feature_gate.go:328] unrecognized feature gate: Example2 Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650286 5113 feature_gate.go:328] unrecognized feature gate: Example Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650293 5113 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650315 5113 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650322 5113 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650329 5113 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650336 5113 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 21 09:17:50 crc 
kubenswrapper[5113]: W0121 09:17:50.650343 5113 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650351 5113 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650358 5113 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650365 5113 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650374 5113 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650383 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650390 5113 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650397 5113 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650405 5113 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650412 5113 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650419 5113 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650427 5113 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650433 5113 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650440 5113 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 21 09:17:50 crc 
kubenswrapper[5113]: W0121 09:17:50.650448 5113 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650456 5113 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650463 5113 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650470 5113 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650477 5113 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650484 5113 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650491 5113 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650498 5113 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650506 5113 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650513 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650520 5113 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650527 5113 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650534 5113 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650542 5113 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650549 5113 feature_gate.go:328] 
unrecognized feature gate: AzureClusterHostedDNSInstall Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650559 5113 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650566 5113 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650573 5113 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650581 5113 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650588 5113 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650597 5113 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650604 5113 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650611 5113 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650619 5113 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650627 5113 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650634 5113 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650641 5113 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650648 5113 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650655 5113 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup 
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650664 5113 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650672 5113 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650679 5113 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650687 5113 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650694 5113 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650702 5113 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650709 5113 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650716 5113 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650724 5113 feature_gate.go:328] unrecognized feature gate: DualReplica
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650760 5113 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650770 5113 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650779 5113 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650787 5113 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650794 5113 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650802 5113 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650810 5113 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Jan 21 09:17:50 crc kubenswrapper[5113]: W0121 09:17:50.650817 5113 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.650831 5113 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.651135 5113 server.go:962] "Client rotation is on, will bootstrap in background"
Jan 21 09:17:50 crc kubenswrapper[5113]: E0121 09:17:50.656071 5113 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.660316 5113 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.660468 5113 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.661366 5113 server.go:1019] "Starting client certificate rotation"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.661551 5113 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.661651 5113 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.669986 5113 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 21 09:17:50 crc kubenswrapper[5113]: E0121 09:17:50.672216 5113 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.181:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.672831 5113 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.683931 5113 log.go:25] "Validated CRI v1 runtime API"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.716591 5113 log.go:25] "Validated CRI v1 image API"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.721226 5113 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.723724 5113 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2026-01-21-09-11-55-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2]
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.723788 5113 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:46 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}]
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.744020 5113 manager.go:217] Machine: {Timestamp:2026-01-21 09:17:50.742322889 +0000 UTC m=+0.243149978 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33649917952 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:a84b16b3-46f1-4672-86d9-42da1a9b9cd6 BootID:814c5727-ea8c-4a4c-99fd-0eb8e7b766cd Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824958976 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824958976 Type:vfs Inodes:4107656 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:46 Capacity:1073741824 Type:vfs Inodes:4107656 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:88:72:33 Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:88:72:33 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:a2:90:75 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:54:b5:80 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:55:1b:1c Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:18:99:6a Speed:-1 Mtu:1496} {Name:eth10 MacAddress:56:c8:7b:d4:05:68 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:96:96:ca:d8:87:d7 Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649917952 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.744309 5113 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.744524 5113 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.745779 5113 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.745848 5113 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.746166 5113 topology_manager.go:138] "Creating topology manager with none policy"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.746183 5113 container_manager_linux.go:306] "Creating device plugin manager"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.746216 5113 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.746495 5113 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.747026 5113 state_mem.go:36] "Initialized new in-memory state store"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.747212 5113 server.go:1267] "Using root directory" path="/var/lib/kubelet"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.747838 5113 kubelet.go:491] "Attempting to sync node with API server"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.747882 5113 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.747903 5113 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.747918 5113 kubelet.go:397] "Adding apiserver pod source"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.747941 5113 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 21 09:17:50 crc kubenswrapper[5113]: E0121 09:17:50.750503 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.181:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 21 09:17:50 crc kubenswrapper[5113]: E0121 09:17:50.750668 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.181:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.751617 5113 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.751640 5113 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.753135 5113 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.753156 5113 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.756208 5113 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.756518 5113 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.757416 5113 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.758061 5113 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.758166 5113 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.758239 5113 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.758321 5113 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.758396 5113 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 21 09:17:50 crc
kubenswrapper[5113]: I0121 09:17:50.758472 5113 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.758545 5113 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.758618 5113 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.758689 5113 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.758810 5113 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.758911 5113 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.759203 5113 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.759523 5113 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.759620 5113 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.760897 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.181:6443: connect: connection refused Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.775378 5113 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.775480 5113 server.go:1295] "Started kubelet" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.775867 5113 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.776050 5113 
ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.776194 5113 server_v1.go:47] "podresources" method="list" useActivePods=true Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.776859 5113 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 21 09:17:50 crc systemd[1]: Started Kubernetes Kubelet. Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.778822 5113 server.go:317] "Adding debug handlers to kubelet server" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.779218 5113 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.779253 5113 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.779770 5113 volume_manager.go:295] "The desired_state_of_world populator starts" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.779801 5113 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.780155 5113 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 21 09:17:50 crc kubenswrapper[5113]: E0121 09:17:50.780629 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:17:50 crc kubenswrapper[5113]: E0121 09:17:50.780664 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" interval="200ms" Jan 21 09:17:50 crc kubenswrapper[5113]: E0121 09:17:50.780273 5113 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.181:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188cb45e4dd3ddb1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:50.775418289 +0000 UTC m=+0.276245358,LastTimestamp:2026-01-21 09:17:50.775418289 +0000 UTC m=+0.276245358,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:17:50 crc kubenswrapper[5113]: E0121 09:17:50.781227 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.181:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.783925 5113 factory.go:55] Registering systemd factory Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.784099 5113 factory.go:223] Registration of the systemd container factory successfully Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.784597 5113 factory.go:153] Registering CRI-O factory Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.784883 5113 factory.go:223] Registration of the crio container factory successfully Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.785018 5113 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.785071 
5113 factory.go:103] Registering Raw factory Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.785095 5113 manager.go:1196] Started watching for new ooms in manager Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.786783 5113 manager.go:319] Starting recovery of all containers Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.827448 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.828076 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.828136 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.828167 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.828210 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.828233 5113 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.828253 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.828300 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.828328 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.828378 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.828404 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.828422 5113 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.828472 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.828498 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.828550 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.828571 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.828589 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.828646 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" 
volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.828673 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.828764 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.828829 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.828882 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.828912 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.828932 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" 
seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.828953 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.828979 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.829033 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.829050 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.829072 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.829091 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 21 09:17:50 crc 
kubenswrapper[5113]: I0121 09:17:50.829108 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.829126 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.829169 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.829186 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.829203 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.829219 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.829265 5113 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.829282 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.829300 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.831554 5113 manager.go:324] Recovery completed Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832186 5113 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832246 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832273 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext="" Jan 21 
09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832289 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832310 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832328 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832343 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832358 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832375 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832390 5113 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832405 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832419 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832433 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832446 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832459 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832474 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832489 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832503 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832535 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832555 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832568 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832593 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" 
volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832608 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832622 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832635 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832651 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832668 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832681 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" 
volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832693 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832722 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832751 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832763 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832789 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832803 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" 
seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832814 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832827 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832839 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832853 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832864 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832876 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 
09:17:50.832886 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832898 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832910 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832922 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832937 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832949 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832961 5113 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832973 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.832988 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833002 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833018 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833032 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833047 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833063 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833078 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833129 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833144 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833161 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833174 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" 
volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833196 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833211 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833231 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833243 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833265 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833280 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" 
volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833294 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833328 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833362 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833376 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833387 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833405 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" 
seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833419 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833432 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833445 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833469 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833482 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: E0121 09:17:50.833473 5113 watcher.go:152] Failed to watch directory "/sys/fs/cgroup/system.slice/crc-pullsecret.service": inotify_add_watch /sys/fs/cgroup/system.slice/crc-pullsecret.service: no such file or directory Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833494 5113 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833584 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833595 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833608 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833625 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833644 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833659 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833671 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833683 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833695 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833706 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833718 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833798 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" 
volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833814 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833828 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833842 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833853 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833865 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833878 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" 
volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833889 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833901 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833913 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833925 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833935 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833946 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" 
seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833959 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833971 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833983 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.833995 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834006 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834017 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834030 5113 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834040 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834052 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834064 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834076 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834086 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834096 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834107 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834118 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834129 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834139 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834150 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834161 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" 
volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834171 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834182 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834195 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834205 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834216 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834226 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" 
seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834237 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834248 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834261 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834272 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834285 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834297 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext="" Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834308 5113 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834322 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834334 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834347 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834359 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834370 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834388 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834399 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834411 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834421 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834431 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834442 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834455 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834465 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834481 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834492 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834503 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834517 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834528 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834539 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834550 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834561 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834574 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834585 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834596 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834607 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834619 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834631 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834644 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834654 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834665 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834676 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834688 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834702 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834715 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834728 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834756 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834771 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834784 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834798 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834809 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834819 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834831 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834841 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834853 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834864 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834874 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834886 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834897 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834908 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834919 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834931 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834943 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.834954 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.835029 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.835044 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.835057 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.835071 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.835082 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.835091 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.835103 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.835114 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.835140 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.835151 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.835162 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.835173 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.835184 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.835194 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.835205 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.835214 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.835225 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.835235 5113 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext=""
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.835244 5113 reconstruct.go:97] "Volume reconstruction finished"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.835250 5113 reconciler.go:26] "Reconciler: start to sync state"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.839065 5113 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.841480 5113 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.841558 5113 status_manager.go:230] "Starting to sync pod status with apiserver"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.841789 5113 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.841827 5113 kubelet.go:2451] "Starting kubelet main sync loop"
Jan 21 09:17:50 crc kubenswrapper[5113]: E0121 09:17:50.842229 5113 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 21 09:17:50 crc kubenswrapper[5113]: E0121 09:17:50.846015 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.181:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.852917 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.854605 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.854643 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.854659 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.857625 5113 cpu_manager.go:222] "Starting CPU manager" policy="none"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.857646 5113 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.857665 5113 state_mem.go:36] "Initialized new in-memory state store"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.861813 5113 policy_none.go:49] "None policy: Start"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.861842 5113 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.861855 5113 state_mem.go:35] "Initializing new in-memory state store"
Jan 21 09:17:50 crc kubenswrapper[5113]: E0121 09:17:50.881581 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.911325 5113 manager.go:341] "Starting Device Plugin manager"
Jan 21 09:17:50 crc kubenswrapper[5113]: E0121 09:17:50.911800 5113 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.911832 5113 server.go:85] "Starting device plugin registration server"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.912650 5113 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.912698 5113 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.913244 5113 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.913354 5113 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.913363 5113 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 21 09:17:50 crc kubenswrapper[5113]: E0121 09:17:50.918039 5113 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\""
Jan 21 09:17:50 crc kubenswrapper[5113]: E0121 09:17:50.918137 5113 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.942532 5113 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"]
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.942781 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.943769 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.943817 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.943834 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.944660 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.944939 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.945014 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.945707 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.945766 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.945785 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.945807 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.945837 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.945847 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.946662 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.946954 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.947039 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.947606 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.947641 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.947657 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.948395 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.948438 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.948451 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.948569 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.948942 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.948981 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.949210 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.949287 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.949308 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.949665 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.949706 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.949723 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.950460 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.950492 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.950536 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.951030 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.951066 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.951095 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.951111 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.951069 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.951161 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.953184 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.953274 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.955705 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.955815 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:17:50 crc kubenswrapper[5113]: I0121 09:17:50.955847 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:17:50 crc kubenswrapper[5113]: E0121 09:17:50.981434 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" interval="400ms"
Jan 21 09:17:50 crc kubenswrapper[5113]: E0121 09:17:50.983035 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:17:50 crc kubenswrapper[5113]: E0121 09:17:50.991462 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.013213 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.014429 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.014487 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.014502 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.014534 5113 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 21 09:17:51 crc kubenswrapper[5113]: E0121 09:17:51.015219 5113 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.181:6443: connect: connection refused" node="crc"
Jan 21 09:17:51 crc kubenswrapper[5113]: E0121 09:17:51.019947 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:17:51 crc kubenswrapper[5113]: E0121 09:17:51.037649 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.038226 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.038329 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.038369 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName:
\"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.038406 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.038442 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.038613 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.038664 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.038718 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod 
\"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.038829 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.038853 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.038872 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.038953 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.039037 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 09:17:51 crc 
kubenswrapper[5113]: I0121 09:17:51.039050 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.039127 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.039189 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.039225 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.039258 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.039275 5113 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.039285 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.039406 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.039454 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.039468 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.039503 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" 
(UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.039501 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.039544 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.039608 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.039659 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.039704 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.040559 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: E0121 09:17:51.044232 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.140668 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.140857 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.140905 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.140942 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: 
\"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.141002 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.140951 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.141066 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.141112 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.141174 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.141225 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.141264 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.141296 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.141336 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.141366 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.141400 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.141435 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.141470 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.141504 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.141531 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.141560 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.142066 5113 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.142129 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.142173 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.142219 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.142273 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.142312 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: 
\"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.142366 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.142331 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.142416 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.142431 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.142476 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.142490 5113 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.216352 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.217510 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.217563 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.217580 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.217616 5113 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 21 09:17:51 crc kubenswrapper[5113]: E0121 09:17:51.218242 5113 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.181:6443: connect: connection refused" node="crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.283925 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.292986 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: W0121 09:17:51.307656 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0bc7fcb0822a2c13eb2d22cd8c0641.slice/crio-ae1a94c13177466492aa5755d9cc5f63e6c3727ca49a2d8b09d128113971a13c WatchSource:0}: Error finding container ae1a94c13177466492aa5755d9cc5f63e6c3727ca49a2d8b09d128113971a13c: Status 404 returned error can't find the container with id ae1a94c13177466492aa5755d9cc5f63e6c3727ca49a2d8b09d128113971a13c Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.314143 5113 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 09:17:51 crc kubenswrapper[5113]: W0121 09:17:51.316913 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b638b8f4bb0070e40528db779baf6a2.slice/crio-fb1e764b2546cf80c2aaefbe94e1baf4cdbcae6281f6f8557eae2d44e9904666 WatchSource:0}: Error finding container fb1e764b2546cf80c2aaefbe94e1baf4cdbcae6281f6f8557eae2d44e9904666: Status 404 returned error can't find the container with id fb1e764b2546cf80c2aaefbe94e1baf4cdbcae6281f6f8557eae2d44e9904666 Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.320160 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.339021 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: W0121 09:17:51.340295 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-5e844a7ed54d8e57abae957f927da3078ec125607e7537a7ee57b88663de20a8 WatchSource:0}: Error finding container 5e844a7ed54d8e57abae957f927da3078ec125607e7537a7ee57b88663de20a8: Status 404 returned error can't find the container with id 5e844a7ed54d8e57abae957f927da3078ec125607e7537a7ee57b88663de20a8 Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.345143 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:17:51 crc kubenswrapper[5113]: W0121 09:17:51.359400 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-fdedb104d95f404b5d42bd591ae1c2d449e139782a40608ce5e4afc7a6895127 WatchSource:0}: Error finding container fdedb104d95f404b5d42bd591ae1c2d449e139782a40608ce5e4afc7a6895127: Status 404 returned error can't find the container with id fdedb104d95f404b5d42bd591ae1c2d449e139782a40608ce5e4afc7a6895127 Jan 21 09:17:51 crc kubenswrapper[5113]: W0121 09:17:51.373534 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-0405ec52154b058f787b3ada891129509012898304708471932d0bd0522a2f93 WatchSource:0}: Error finding container 0405ec52154b058f787b3ada891129509012898304708471932d0bd0522a2f93: Status 404 returned error can't find the container with id 0405ec52154b058f787b3ada891129509012898304708471932d0bd0522a2f93 Jan 21 09:17:51 crc kubenswrapper[5113]: E0121 09:17:51.382165 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" interval="800ms" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.619127 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.620905 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.620966 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.620981 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.621012 5113 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 21 09:17:51 crc kubenswrapper[5113]: E0121 09:17:51.621554 5113 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.181:6443: connect: connection refused" node="crc" Jan 21 09:17:51 crc kubenswrapper[5113]: E0121 09:17:51.684465 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.181:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.761956 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.181:6443: connect: connection refused Jan 21 09:17:51 
crc kubenswrapper[5113]: I0121 09:17:51.849828 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"0405ec52154b058f787b3ada891129509012898304708471932d0bd0522a2f93"} Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.851711 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"fdedb104d95f404b5d42bd591ae1c2d449e139782a40608ce5e4afc7a6895127"} Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.854562 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"5e844a7ed54d8e57abae957f927da3078ec125607e7537a7ee57b88663de20a8"} Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.856704 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"fb1e764b2546cf80c2aaefbe94e1baf4cdbcae6281f6f8557eae2d44e9904666"} Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.859725 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"8b462f33795ed36c96eb82d0605e3d0d75cda8a208712e5a08bbe1199b460457"} Jan 21 09:17:51 crc kubenswrapper[5113]: I0121 09:17:51.859795 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"ae1a94c13177466492aa5755d9cc5f63e6c3727ca49a2d8b09d128113971a13c"} Jan 21 09:17:51 crc kubenswrapper[5113]: E0121 09:17:51.921460 5113 
reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.181:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 21 09:17:52 crc kubenswrapper[5113]: E0121 09:17:52.119990 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.181:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 21 09:17:52 crc kubenswrapper[5113]: E0121 09:17:52.183006 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" interval="1.6s" Jan 21 09:17:52 crc kubenswrapper[5113]: E0121 09:17:52.279599 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.181:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 21 09:17:52 crc kubenswrapper[5113]: I0121 09:17:52.421838 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:17:52 crc kubenswrapper[5113]: I0121 09:17:52.422800 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:17:52 crc kubenswrapper[5113]: I0121 09:17:52.422861 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 
21 09:17:52 crc kubenswrapper[5113]: I0121 09:17:52.422879 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:17:52 crc kubenswrapper[5113]: I0121 09:17:52.422916 5113 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 21 09:17:52 crc kubenswrapper[5113]: E0121 09:17:52.423412 5113 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.181:6443: connect: connection refused" node="crc" Jan 21 09:17:52 crc kubenswrapper[5113]: I0121 09:17:52.762100 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.181:6443: connect: connection refused Jan 21 09:17:52 crc kubenswrapper[5113]: I0121 09:17:52.837319 5113 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 21 09:17:52 crc kubenswrapper[5113]: E0121 09:17:52.838712 5113 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.181:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 21 09:17:52 crc kubenswrapper[5113]: I0121 09:17:52.863417 5113 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="8f36f477e5d68cc57d9416681cfa4f9bf3ddce9fcd5eabc6232df87d40fa2477" exitCode=0 Jan 21 09:17:52 crc kubenswrapper[5113]: I0121 09:17:52.863494 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"8f36f477e5d68cc57d9416681cfa4f9bf3ddce9fcd5eabc6232df87d40fa2477"} Jan 21 09:17:52 crc kubenswrapper[5113]: I0121 09:17:52.863656 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:17:52 crc kubenswrapper[5113]: I0121 09:17:52.864168 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:17:52 crc kubenswrapper[5113]: I0121 09:17:52.864198 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:17:52 crc kubenswrapper[5113]: I0121 09:17:52.864209 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:17:52 crc kubenswrapper[5113]: E0121 09:17:52.864391 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:17:52 crc kubenswrapper[5113]: I0121 09:17:52.865549 5113 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="388aba20f2513376eaf1b69444ab5c9be3a8b48690161caa6ec6c54c39def4d0" exitCode=0 Jan 21 09:17:52 crc kubenswrapper[5113]: I0121 09:17:52.865634 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:17:52 crc kubenswrapper[5113]: I0121 09:17:52.865883 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:17:52 crc kubenswrapper[5113]: I0121 09:17:52.865952 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"388aba20f2513376eaf1b69444ab5c9be3a8b48690161caa6ec6c54c39def4d0"} Jan 21 09:17:52 crc kubenswrapper[5113]: I0121 09:17:52.866370 5113 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:17:52 crc kubenswrapper[5113]: I0121 09:17:52.866392 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:17:52 crc kubenswrapper[5113]: I0121 09:17:52.866407 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:17:52 crc kubenswrapper[5113]: I0121 09:17:52.866920 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:17:52 crc kubenswrapper[5113]: I0121 09:17:52.866939 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:17:52 crc kubenswrapper[5113]: I0121 09:17:52.866949 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:17:52 crc kubenswrapper[5113]: E0121 09:17:52.867110 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:17:52 crc kubenswrapper[5113]: I0121 09:17:52.869503 5113 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="3753f39d3e813e69237bc578baa42b6e2f7c1e1498ec995df75799be2050518e" exitCode=0 Jan 21 09:17:52 crc kubenswrapper[5113]: I0121 09:17:52.869530 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"3753f39d3e813e69237bc578baa42b6e2f7c1e1498ec995df75799be2050518e"} Jan 21 09:17:52 crc kubenswrapper[5113]: I0121 09:17:52.869640 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:17:52 crc kubenswrapper[5113]: I0121 09:17:52.870372 5113 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:17:52 crc kubenswrapper[5113]: I0121 09:17:52.870423 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:17:52 crc kubenswrapper[5113]: I0121 09:17:52.870442 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:17:52 crc kubenswrapper[5113]: E0121 09:17:52.870786 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:17:52 crc kubenswrapper[5113]: I0121 09:17:52.870829 5113 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="b2848ce1074198a4b52b03d33de283d142451a392df5429e8ff195a46f6d0e86" exitCode=0 Jan 21 09:17:52 crc kubenswrapper[5113]: I0121 09:17:52.870883 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"b2848ce1074198a4b52b03d33de283d142451a392df5429e8ff195a46f6d0e86"} Jan 21 09:17:52 crc kubenswrapper[5113]: I0121 09:17:52.870977 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:17:52 crc kubenswrapper[5113]: I0121 09:17:52.871699 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:17:52 crc kubenswrapper[5113]: I0121 09:17:52.871724 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:17:52 crc kubenswrapper[5113]: I0121 09:17:52.871763 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:17:52 crc kubenswrapper[5113]: E0121 09:17:52.871923 5113 kubelet.go:3336] "No 
need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:17:52 crc kubenswrapper[5113]: I0121 09:17:52.875342 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"a4ec8a94add26d51ed633586f393082adc6e68c92b60b61a35848cc17f8f1b0c"} Jan 21 09:17:52 crc kubenswrapper[5113]: I0121 09:17:52.875381 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"f961975c0ec900635a901f00482c245a386ecd3f3e7dca899cecb812133ce940"} Jan 21 09:17:52 crc kubenswrapper[5113]: I0121 09:17:52.875394 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"7c18a697677f607ab3265a3d04edbad68557370feb7ff27c2efe99d3180f75fc"} Jan 21 09:17:52 crc kubenswrapper[5113]: I0121 09:17:52.875489 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:17:52 crc kubenswrapper[5113]: I0121 09:17:52.875970 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:17:52 crc kubenswrapper[5113]: I0121 09:17:52.876025 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:17:52 crc kubenswrapper[5113]: I0121 09:17:52.876041 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:17:52 crc kubenswrapper[5113]: E0121 09:17:52.876304 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" 
node="crc" Jan 21 09:17:53 crc kubenswrapper[5113]: I0121 09:17:53.887320 5113 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="a4bab95cb7dacee322ad597eb1e0f7032a4198d682a2da0799e593d3b254862f" exitCode=0 Jan 21 09:17:53 crc kubenswrapper[5113]: I0121 09:17:53.887463 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"a4bab95cb7dacee322ad597eb1e0f7032a4198d682a2da0799e593d3b254862f"} Jan 21 09:17:53 crc kubenswrapper[5113]: I0121 09:17:53.887599 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:17:53 crc kubenswrapper[5113]: I0121 09:17:53.888457 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:17:53 crc kubenswrapper[5113]: I0121 09:17:53.888485 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:17:53 crc kubenswrapper[5113]: I0121 09:17:53.888497 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:17:53 crc kubenswrapper[5113]: E0121 09:17:53.888709 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:17:53 crc kubenswrapper[5113]: I0121 09:17:53.897770 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"03413fd6528d11a5ca1743e7b6d3b467b83b8013e06d4e7f02da3a81f5a3c159"} Jan 21 09:17:53 crc kubenswrapper[5113]: I0121 09:17:53.897840 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:17:53 crc kubenswrapper[5113]: I0121 
09:17:53.898508 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:17:53 crc kubenswrapper[5113]: I0121 09:17:53.898544 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:17:53 crc kubenswrapper[5113]: I0121 09:17:53.898556 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:17:53 crc kubenswrapper[5113]: E0121 09:17:53.898810 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:17:53 crc kubenswrapper[5113]: I0121 09:17:53.900807 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"1a0c5d81cf6abf48ca564992e06379ca3f1d1890623591e6b86d4c79694e2f7b"} Jan 21 09:17:53 crc kubenswrapper[5113]: I0121 09:17:53.900905 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"c0b4092be17370b0e45f24a8e79a48cd1549f4ef547228bd2995d316975bdd42"} Jan 21 09:17:53 crc kubenswrapper[5113]: I0121 09:17:53.901001 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"153e83973dd71f7b23b37e6143b3c9de9d118112045570d63b00cdc939edc29a"} Jan 21 09:17:53 crc kubenswrapper[5113]: I0121 09:17:53.900927 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:17:53 crc kubenswrapper[5113]: I0121 09:17:53.901757 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 
09:17:53 crc kubenswrapper[5113]: I0121 09:17:53.901797 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:17:53 crc kubenswrapper[5113]: I0121 09:17:53.901811 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:17:53 crc kubenswrapper[5113]: E0121 09:17:53.901973 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:17:53 crc kubenswrapper[5113]: I0121 09:17:53.903779 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"ea73520647d1029136429fbd1dd2f9ae77c16ccdd5d18b96557ba585203bc15a"} Jan 21 09:17:53 crc kubenswrapper[5113]: I0121 09:17:53.903811 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"a5d9b314f85de77e4faf2c3725f9ac3bbf0b3efa5333a458592300fe5fadb236"} Jan 21 09:17:53 crc kubenswrapper[5113]: I0121 09:17:53.903824 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"8c21a85eeaadf7c1ac610c91b634072cccf19ee75ba906cbfb9422538406201a"} Jan 21 09:17:53 crc kubenswrapper[5113]: I0121 09:17:53.903836 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"f55202dae8577531752698ed58e25d96faab357fd47c7e1214e97d227c27dec1"} Jan 21 09:17:53 crc kubenswrapper[5113]: I0121 09:17:53.903899 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:17:53 crc 
kubenswrapper[5113]: I0121 09:17:53.904325 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:17:53 crc kubenswrapper[5113]: I0121 09:17:53.904352 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:17:53 crc kubenswrapper[5113]: I0121 09:17:53.904363 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:17:53 crc kubenswrapper[5113]: E0121 09:17:53.904641 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:17:54 crc kubenswrapper[5113]: I0121 09:17:54.024445 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:17:54 crc kubenswrapper[5113]: I0121 09:17:54.025763 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:17:54 crc kubenswrapper[5113]: I0121 09:17:54.025818 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:17:54 crc kubenswrapper[5113]: I0121 09:17:54.025832 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:17:54 crc kubenswrapper[5113]: I0121 09:17:54.025862 5113 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 21 09:17:54 crc kubenswrapper[5113]: I0121 09:17:54.563413 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 09:17:54 crc kubenswrapper[5113]: I0121 09:17:54.572458 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 09:17:54 crc kubenswrapper[5113]: I0121 09:17:54.914290 5113 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"e911156f4c12519d3df759a20d82c8a6464035eb8117c7edd3d085b5f83fe37a"} Jan 21 09:17:54 crc kubenswrapper[5113]: I0121 09:17:54.914540 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:17:54 crc kubenswrapper[5113]: I0121 09:17:54.915943 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:17:54 crc kubenswrapper[5113]: I0121 09:17:54.916004 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:17:54 crc kubenswrapper[5113]: I0121 09:17:54.916031 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:17:54 crc kubenswrapper[5113]: E0121 09:17:54.916598 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:17:54 crc kubenswrapper[5113]: I0121 09:17:54.918865 5113 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="13402d612234ad12bec0ec95debbb81207d159bdc87db7ba5f63780a70c18d8e" exitCode=0 Jan 21 09:17:54 crc kubenswrapper[5113]: I0121 09:17:54.919085 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"13402d612234ad12bec0ec95debbb81207d159bdc87db7ba5f63780a70c18d8e"} Jan 21 09:17:54 crc kubenswrapper[5113]: I0121 09:17:54.919108 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:17:54 crc kubenswrapper[5113]: I0121 09:17:54.919147 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller 
attach/detach" Jan 21 09:17:54 crc kubenswrapper[5113]: I0121 09:17:54.919097 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:17:54 crc kubenswrapper[5113]: I0121 09:17:54.919718 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 09:17:54 crc kubenswrapper[5113]: I0121 09:17:54.920371 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:17:54 crc kubenswrapper[5113]: I0121 09:17:54.920539 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:17:54 crc kubenswrapper[5113]: I0121 09:17:54.920618 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:17:54 crc kubenswrapper[5113]: I0121 09:17:54.920645 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:17:54 crc kubenswrapper[5113]: I0121 09:17:54.920660 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:17:54 crc kubenswrapper[5113]: E0121 09:17:54.922596 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:17:54 crc kubenswrapper[5113]: I0121 09:17:54.923606 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:17:54 crc kubenswrapper[5113]: I0121 09:17:54.923702 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:17:54 crc kubenswrapper[5113]: I0121 09:17:54.923772 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:17:54 crc kubenswrapper[5113]: 
I0121 09:17:54.923817 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:17:54 crc kubenswrapper[5113]: I0121 09:17:54.923834 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:17:54 crc kubenswrapper[5113]: I0121 09:17:54.924423 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:17:54 crc kubenswrapper[5113]: E0121 09:17:54.924434 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:17:54 crc kubenswrapper[5113]: I0121 09:17:54.925831 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:17:54 crc kubenswrapper[5113]: E0121 09:17:54.925844 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:17:54 crc kubenswrapper[5113]: I0121 09:17:54.925872 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:17:54 crc kubenswrapper[5113]: E0121 09:17:54.926397 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:17:55 crc kubenswrapper[5113]: I0121 09:17:55.926500 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"b733dbfb86faee538fadad196e9f4133653e380f046a2a700481592da8080079"} Jan 21 09:17:55 crc kubenswrapper[5113]: I0121 09:17:55.926954 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"764cbe8f2707a2fadda1efee75054bf400af0119c406626759b367c3bd5b9b6f"} Jan 21 09:17:55 crc kubenswrapper[5113]: I0121 09:17:55.926978 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"9c1502d2cb1898d8b79d4913673b05f8750b4eee5a387d50e0c69798a64c957b"} Jan 21 09:17:55 crc kubenswrapper[5113]: I0121 09:17:55.926560 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:17:55 crc kubenswrapper[5113]: I0121 09:17:55.926641 5113 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 09:17:55 crc kubenswrapper[5113]: I0121 09:17:55.927165 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:17:55 crc kubenswrapper[5113]: I0121 09:17:55.926631 5113 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 09:17:55 crc kubenswrapper[5113]: I0121 09:17:55.927332 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:17:55 crc kubenswrapper[5113]: I0121 09:17:55.927929 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:17:55 crc kubenswrapper[5113]: I0121 09:17:55.927957 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:17:55 crc kubenswrapper[5113]: I0121 09:17:55.927970 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:17:55 crc kubenswrapper[5113]: I0121 09:17:55.928085 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:17:55 crc kubenswrapper[5113]: I0121 09:17:55.928128 5113 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:17:55 crc kubenswrapper[5113]: I0121 09:17:55.928138 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:17:55 crc kubenswrapper[5113]: I0121 09:17:55.928150 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:17:55 crc kubenswrapper[5113]: I0121 09:17:55.928163 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:17:55 crc kubenswrapper[5113]: I0121 09:17:55.928173 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:17:55 crc kubenswrapper[5113]: E0121 09:17:55.928355 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:17:55 crc kubenswrapper[5113]: E0121 09:17:55.928486 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:17:55 crc kubenswrapper[5113]: E0121 09:17:55.929130 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:17:55 crc kubenswrapper[5113]: I0121 09:17:55.989937 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 09:17:56 crc kubenswrapper[5113]: I0121 09:17:56.620609 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 09:17:56 crc kubenswrapper[5113]: I0121 09:17:56.776424 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 09:17:56 crc kubenswrapper[5113]: I0121 09:17:56.933408 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"721e8d92025d56350fd00228873a1f33f257d12ef3a712bc8a07ec9238a8a021"}
Jan 21 09:17:56 crc kubenswrapper[5113]: I0121 09:17:56.933489 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"7744c08a2337b8278ce3b654e963924c4a470d102b593634ab6da80cfc6ab5ef"}
Jan 21 09:17:56 crc kubenswrapper[5113]: I0121 09:17:56.933637 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:17:56 crc kubenswrapper[5113]: I0121 09:17:56.933681 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:17:56 crc kubenswrapper[5113]: I0121 09:17:56.934134 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:17:56 crc kubenswrapper[5113]: I0121 09:17:56.934540 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:17:56 crc kubenswrapper[5113]: I0121 09:17:56.934590 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:17:56 crc kubenswrapper[5113]: I0121 09:17:56.934619 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:17:56 crc kubenswrapper[5113]: I0121 09:17:56.934621 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:17:56 crc kubenswrapper[5113]: I0121 09:17:56.934820 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:17:56 crc kubenswrapper[5113]: I0121 09:17:56.934844 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:17:56 crc kubenswrapper[5113]: I0121 09:17:56.934925 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:17:56 crc kubenswrapper[5113]: I0121 09:17:56.934961 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:17:56 crc kubenswrapper[5113]: I0121 09:17:56.934984 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:17:56 crc kubenswrapper[5113]: E0121 09:17:56.935377 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:17:56 crc kubenswrapper[5113]: E0121 09:17:56.935442 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:17:56 crc kubenswrapper[5113]: E0121 09:17:56.935843 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:17:57 crc kubenswrapper[5113]: I0121 09:17:57.002397 5113 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Jan 21 09:17:57 crc kubenswrapper[5113]: I0121 09:17:57.936619 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:17:57 crc kubenswrapper[5113]: I0121 09:17:57.936679 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:17:57 crc kubenswrapper[5113]: I0121 09:17:57.937570 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:17:57 crc kubenswrapper[5113]: I0121 09:17:57.937606 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:17:57 crc kubenswrapper[5113]: I0121 09:17:57.937624 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:17:57 crc kubenswrapper[5113]: I0121 09:17:57.937577 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:17:57 crc kubenswrapper[5113]: I0121 09:17:57.937718 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:17:57 crc kubenswrapper[5113]: I0121 09:17:57.937836 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:17:57 crc kubenswrapper[5113]: E0121 09:17:57.938288 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:17:57 crc kubenswrapper[5113]: E0121 09:17:57.938808 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:17:59 crc kubenswrapper[5113]: I0121 09:17:59.271460 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 09:17:59 crc kubenswrapper[5113]: I0121 09:17:59.271651 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:17:59 crc kubenswrapper[5113]: I0121 09:17:59.272515 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:17:59 crc kubenswrapper[5113]: I0121 09:17:59.272548 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:17:59 crc kubenswrapper[5113]: I0121 09:17:59.272557 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:17:59 crc kubenswrapper[5113]: E0121 09:17:59.272827 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:17:59 crc kubenswrapper[5113]: I0121 09:17:59.356334 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 09:17:59 crc kubenswrapper[5113]: I0121 09:17:59.356812 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:17:59 crc kubenswrapper[5113]: I0121 09:17:59.358186 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:17:59 crc kubenswrapper[5113]: I0121 09:17:59.358315 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:17:59 crc kubenswrapper[5113]: I0121 09:17:59.358331 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:17:59 crc kubenswrapper[5113]: E0121 09:17:59.358840 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:18:00 crc kubenswrapper[5113]: I0121 09:18:00.707365 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 09:18:00 crc kubenswrapper[5113]: I0121 09:18:00.707679 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:18:00 crc kubenswrapper[5113]: I0121 09:18:00.708847 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:18:00 crc kubenswrapper[5113]: I0121 09:18:00.708950 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:18:00 crc kubenswrapper[5113]: I0121 09:18:00.708979 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:18:00 crc kubenswrapper[5113]: E0121 09:18:00.709642 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:18:00 crc kubenswrapper[5113]: I0121 09:18:00.893621 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc"
Jan 21 09:18:00 crc kubenswrapper[5113]: I0121 09:18:00.893961 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:18:00 crc kubenswrapper[5113]: I0121 09:18:00.895366 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:18:00 crc kubenswrapper[5113]: I0121 09:18:00.895416 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:18:00 crc kubenswrapper[5113]: I0121 09:18:00.895436 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:18:00 crc kubenswrapper[5113]: E0121 09:18:00.895960 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:18:00 crc kubenswrapper[5113]: E0121 09:18:00.918465 5113 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 21 09:18:03 crc kubenswrapper[5113]: I0121 09:18:03.708281 5113 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded" start-of-body=
Jan 21 09:18:03 crc kubenswrapper[5113]: I0121 09:18:03.708384 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded"
Jan 21 09:18:03 crc kubenswrapper[5113]: I0121 09:18:03.763981 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Jan 21 09:18:03 crc kubenswrapper[5113]: E0121 09:18:03.785240 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s"
Jan 21 09:18:03 crc kubenswrapper[5113]: I0121 09:18:03.864203 5113 trace.go:236] Trace[1088735734]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 09:17:53.863) (total time: 10000ms):
Jan 21 09:18:03 crc kubenswrapper[5113]: Trace[1088735734]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (09:18:03.864)
Jan 21 09:18:03 crc kubenswrapper[5113]: Trace[1088735734]: [10.000711781s] [10.000711781s] END
Jan 21 09:18:03 crc kubenswrapper[5113]: E0121 09:18:03.864241 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 21 09:18:03 crc kubenswrapper[5113]: I0121 09:18:03.886651 5113 trace.go:236] Trace[44612689]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 09:17:53.884) (total time: 10001ms):
Jan 21 09:18:03 crc kubenswrapper[5113]: Trace[44612689]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (09:18:03.886)
Jan 21 09:18:03 crc kubenswrapper[5113]: Trace[44612689]: [10.001612986s] [10.001612986s] END
Jan 21 09:18:03 crc kubenswrapper[5113]: E0121 09:18:03.886704 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 21 09:18:03 crc kubenswrapper[5113]: I0121 09:18:03.897567 5113 trace.go:236] Trace[1695398767]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 09:17:53.895) (total time: 10001ms):
Jan 21 09:18:03 crc kubenswrapper[5113]: Trace[1695398767]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (09:18:03.897)
Jan 21 09:18:03 crc kubenswrapper[5113]: Trace[1695398767]: [10.001915874s] [10.001915874s] END
Jan 21 09:18:03 crc kubenswrapper[5113]: E0121 09:18:03.897622 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 21 09:18:03 crc kubenswrapper[5113]: I0121 09:18:03.975663 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Jan 21 09:18:03 crc kubenswrapper[5113]: I0121 09:18:03.976135 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:18:03 crc kubenswrapper[5113]: I0121 09:18:03.977228 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:18:03 crc kubenswrapper[5113]: I0121 09:18:03.977355 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:18:03 crc kubenswrapper[5113]: I0121 09:18:03.977384 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:18:03 crc kubenswrapper[5113]: E0121 09:18:03.978089 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:18:04 crc kubenswrapper[5113]: E0121 09:18:04.027195 5113 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc"
Jan 21 09:18:04 crc kubenswrapper[5113]: I0121 09:18:04.104682 5113 trace.go:236] Trace[1411419259]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 09:17:54.103) (total time: 10001ms):
Jan 21 09:18:04 crc kubenswrapper[5113]: Trace[1411419259]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (09:18:04.104)
Jan 21 09:18:04 crc kubenswrapper[5113]: Trace[1411419259]: [10.001177013s] [10.001177013s] END
Jan 21 09:18:04 crc kubenswrapper[5113]: E0121 09:18:04.104782 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 21 09:18:05 crc kubenswrapper[5113]: I0121 09:18:05.345865 5113 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 21 09:18:05 crc kubenswrapper[5113]: I0121 09:18:05.345992 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 21 09:18:05 crc kubenswrapper[5113]: I0121 09:18:05.358511 5113 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 21 09:18:05 crc kubenswrapper[5113]: I0121 09:18:05.358576 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Jan 21 09:18:06 crc kubenswrapper[5113]: E0121 09:18:06.987895 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="6.4s"
Jan 21 09:18:07 crc kubenswrapper[5113]: I0121 09:18:07.227830 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:18:07 crc kubenswrapper[5113]: I0121 09:18:07.228911 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:18:07 crc kubenswrapper[5113]: I0121 09:18:07.228971 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:18:07 crc kubenswrapper[5113]: I0121 09:18:07.228990 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:18:07 crc kubenswrapper[5113]: I0121 09:18:07.229030 5113 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 21 09:18:07 crc kubenswrapper[5113]: E0121 09:18:07.244186 5113 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 21 09:18:07 crc kubenswrapper[5113]: E0121 09:18:07.674976 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 21 09:18:08 crc kubenswrapper[5113]: E0121 09:18:08.656601 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 21 09:18:09 crc kubenswrapper[5113]: I0121 09:18:09.365275 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 09:18:09 crc kubenswrapper[5113]: I0121 09:18:09.365574 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:18:09 crc kubenswrapper[5113]: I0121 09:18:09.366664 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:18:09 crc kubenswrapper[5113]: I0121 09:18:09.366728 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:18:09 crc kubenswrapper[5113]: I0121 09:18:09.366780 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:18:09 crc kubenswrapper[5113]: E0121 09:18:09.367350 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:18:09 crc kubenswrapper[5113]: I0121 09:18:09.373682 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 09:18:09 crc kubenswrapper[5113]: E0121 09:18:09.917036 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 21 09:18:09 crc kubenswrapper[5113]: E0121 09:18:09.917279 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 21 09:18:09 crc kubenswrapper[5113]: I0121 09:18:09.970354 5113 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 21 09:18:09 crc kubenswrapper[5113]: I0121 09:18:09.970451 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:18:09 crc kubenswrapper[5113]: I0121 09:18:09.971453 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:18:09 crc kubenswrapper[5113]: I0121 09:18:09.971526 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:18:09 crc kubenswrapper[5113]: I0121 09:18:09.971545 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:18:09 crc kubenswrapper[5113]: E0121 09:18:09.972274 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:18:10 crc kubenswrapper[5113]: I0121 09:18:10.356652 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 09:18:10 crc kubenswrapper[5113]: I0121 09:18:10.358042 5113 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.363230 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb45e4dd3ddb1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:50.775418289 +0000 UTC m=+0.276245358,LastTimestamp:2026-01-21 09:17:50.775418289 +0000 UTC m=+0.276245358,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.369392 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb45e528c904d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:50.854631501 +0000 UTC m=+0.355458560,LastTimestamp:2026-01-21 09:17:50.854631501 +0000 UTC m=+0.355458560,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.376648 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb45e528cdaa4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:50.854650532 +0000 UTC m=+0.355477591,LastTimestamp:2026-01-21 09:17:50.854650532 +0000 UTC m=+0.355477591,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.383844 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb45e528d12da default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:50.854664922 +0000 UTC m=+0.355491981,LastTimestamp:2026-01-21 09:17:50.854664922 +0000 UTC m=+0.355491981,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 09:18:10 crc kubenswrapper[5113]: I0121 09:18:10.391789 5113 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:50902->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 21 09:18:10 crc kubenswrapper[5113]: I0121 09:18:10.391884 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:50902->192.168.126.11:17697: read: connection reset by peer"
Jan 21 09:18:10 crc kubenswrapper[5113]: I0121 09:18:10.391820 5113 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35636->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 21 09:18:10 crc kubenswrapper[5113]: I0121 09:18:10.392022 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35636->192.168.126.11:17697: read: connection reset by peer"
Jan 21 09:18:10 crc kubenswrapper[5113]: I0121 09:18:10.392614 5113 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 21 09:18:10 crc kubenswrapper[5113]: I0121 09:18:10.392646 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.393284 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb45e562b800c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:50.915379212 +0000 UTC m=+0.416206271,LastTimestamp:2026-01-21 09:17:50.915379212 +0000 UTC m=+0.416206271,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.399599 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cb45e528c904d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb45e528c904d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:50.854631501 +0000 UTC m=+0.355458560,LastTimestamp:2026-01-21 09:17:50.943797414 +0000 UTC m=+0.444624473,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.408869 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cb45e528cdaa4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb45e528cdaa4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:50.854650532 +0000 UTC m=+0.355477591,LastTimestamp:2026-01-21 09:17:50.943826885 +0000 UTC m=+0.444653944,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.416601 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cb45e528d12da\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb45e528d12da default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:50.854664922 +0000 UTC m=+0.355491981,LastTimestamp:2026-01-21 09:17:50.943841556 +0000 UTC m=+0.444668615,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.422660 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cb45e528c904d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb45e528c904d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:50.854631501 +0000 UTC m=+0.355458560,LastTimestamp:2026-01-21 09:17:50.945729907 +0000 UTC m=+0.446556966,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.430955 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cb45e528cdaa4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb45e528cdaa4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:50.854650532 +0000 UTC m=+0.355477591,LastTimestamp:2026-01-21 09:17:50.945777238 +0000 UTC m=+0.446604297,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.436953 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cb45e528d12da\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb45e528d12da default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:50.854664922 +0000 UTC m=+0.355491981,LastTimestamp:2026-01-21 09:17:50.945794079 +0000 UTC m=+0.446621138,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.446725 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cb45e528c904d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb45e528c904d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:50.854631501 +0000 UTC m=+0.355458560,LastTimestamp:2026-01-21 09:17:50.945827979 +0000 UTC m=+0.446655028,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.451488 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cb45e528cdaa4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb45e528cdaa4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:50.854650532 +0000 UTC m=+0.355477591,LastTimestamp:2026-01-21 09:17:50.94584279 +0000 UTC m=+0.446669839,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.457620 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cb45e528d12da\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb45e528d12da default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:50.854664922 +0000 UTC m=+0.355491981,LastTimestamp:2026-01-21 09:17:50.9458514 +0000 UTC m=+0.446678439,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.464506 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cb45e528c904d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb45e528c904d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:50.854631501 +0000 UTC m=+0.355458560,LastTimestamp:2026-01-21 09:17:50.947630308 +0000 UTC m=+0.448457377,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.470179 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cb45e528cdaa4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\"
in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb45e528cdaa4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:50.854650532 +0000 UTC m=+0.355477591,LastTimestamp:2026-01-21 09:17:50.947650449 +0000 UTC m=+0.448477518,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.475643 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cb45e528d12da\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb45e528d12da default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:50.854664922 +0000 UTC m=+0.355491981,LastTimestamp:2026-01-21 09:17:50.947664429 +0000 UTC m=+0.448491488,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.481610 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cb45e528c904d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb45e528c904d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:50.854631501 +0000 UTC m=+0.355458560,LastTimestamp:2026-01-21 09:17:50.94841872 +0000 UTC m=+0.449245779,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.487726 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cb45e528cdaa4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb45e528cdaa4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:50.854650532 +0000 UTC m=+0.355477591,LastTimestamp:2026-01-21 09:17:50.948445321 +0000 UTC m=+0.449272380,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.492513 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cb45e528d12da\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb45e528d12da default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc 
status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:50.854664922 +0000 UTC m=+0.355491981,LastTimestamp:2026-01-21 09:17:50.948456681 +0000 UTC m=+0.449283740,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.498881 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cb45e528c904d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb45e528c904d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:50.854631501 +0000 UTC m=+0.355458560,LastTimestamp:2026-01-21 09:17:50.949237892 +0000 UTC m=+0.450064961,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.505035 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cb45e528cdaa4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb45e528cdaa4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:50.854650532 +0000 UTC 
m=+0.355477591,LastTimestamp:2026-01-21 09:17:50.949299094 +0000 UTC m=+0.450126153,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.509994 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cb45e528d12da\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb45e528d12da default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:50.854664922 +0000 UTC m=+0.355491981,LastTimestamp:2026-01-21 09:17:50.949314854 +0000 UTC m=+0.450141913,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.518051 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cb45e528c904d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb45e528c904d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:50.854631501 +0000 UTC m=+0.355458560,LastTimestamp:2026-01-21 09:17:50.949691854 +0000 UTC m=+0.450518913,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.531102 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188cb45e528cdaa4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188cb45e528cdaa4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:50.854650532 +0000 UTC m=+0.355477591,LastTimestamp:2026-01-21 09:17:50.949716235 +0000 UTC m=+0.450543304,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.537248 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188cb45e6df52fd5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:51.314472917 +0000 UTC 
m=+0.815299966,LastTimestamp:2026-01-21 09:17:51.314472917 +0000 UTC m=+0.815299966,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.543418 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188cb45e6e38564d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:51.318873677 +0000 UTC m=+0.819700726,LastTimestamp:2026-01-21 09:17:51.318873677 +0000 UTC m=+0.819700726,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.549184 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188cb45e6fa5dfa6 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:51.342829478 +0000 UTC m=+0.843656527,LastTimestamp:2026-01-21 09:17:51.342829478 +0000 UTC m=+0.843656527,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.553029 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb45e70d26223 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:51.362523683 +0000 UTC m=+0.863350732,LastTimestamp:2026-01-21 09:17:51.362523683 +0000 UTC m=+0.863350732,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.557496 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb45e72091bee openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:51.382887406 +0000 UTC m=+0.883714495,LastTimestamp:2026-01-21 09:17:51.382887406 +0000 UTC m=+0.883714495,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.562552 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188cb45e8d0fbcd2 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:51.836306642 +0000 UTC m=+1.337133701,LastTimestamp:2026-01-21 09:17:51.836306642 +0000 UTC m=+1.337133701,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.565118 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188cb45e8d11cbe8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:51.836441576 +0000 UTC m=+1.337268625,LastTimestamp:2026-01-21 09:17:51.836441576 +0000 UTC m=+1.337268625,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.567111 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb45e8d15fd07 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:51.836716295 +0000 UTC m=+1.337543354,LastTimestamp:2026-01-21 09:17:51.836716295 +0000 UTC 
m=+1.337543354,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.570141 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb45e8d4d3cb9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:51.840337081 +0000 UTC m=+1.341164130,LastTimestamp:2026-01-21 09:17:51.840337081 +0000 UTC m=+1.341164130,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.574852 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188cb45e8d8c3da3 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:51.844466083 +0000 UTC 
m=+1.345293142,LastTimestamp:2026-01-21 09:17:51.844466083 +0000 UTC m=+1.345293142,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.581677 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188cb45e8dc4ce7e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:51.848173182 +0000 UTC m=+1.349000251,LastTimestamp:2026-01-21 09:17:51.848173182 +0000 UTC m=+1.349000251,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.587101 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188cb45e8dd3624b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:51.849128523 +0000 UTC m=+1.349955592,LastTimestamp:2026-01-21 09:17:51.849128523 +0000 UTC m=+1.349955592,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.592124 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb45e8e1626fd openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:51.853504253 +0000 UTC m=+1.354331322,LastTimestamp:2026-01-21 09:17:51.853504253 +0000 UTC m=+1.354331322,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.601048 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188cb45e8e3a4435 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:51.855871029 +0000 UTC m=+1.356698078,LastTimestamp:2026-01-21 09:17:51.855871029 +0000 UTC m=+1.356698078,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.607259 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb45e8ebe9eef openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:51.864545007 +0000 UTC m=+1.365372056,LastTimestamp:2026-01-21 09:17:51.864545007 +0000 UTC m=+1.365372056,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.612219 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" 
in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188cb45e8ec58209 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:51.864996361 +0000 UTC m=+1.365823420,LastTimestamp:2026-01-21 09:17:51.864996361 +0000 UTC m=+1.365823420,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.616184 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188cb45ea000f088 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:52.154103944 +0000 UTC m=+1.654931023,LastTimestamp:2026-01-21 09:17:52.154103944 +0000 UTC m=+1.654931023,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.620780 5113 event.go:359] "Server rejected event 
(will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188cb45ea0a5cd20 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:52.16490832 +0000 UTC m=+1.665735409,LastTimestamp:2026-01-21 09:17:52.16490832 +0000 UTC m=+1.665735409,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.626994 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188cb45ea0bc8108 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:52.166396168 +0000 UTC 
m=+1.667223257,LastTimestamp:2026-01-21 09:17:52.166396168 +0000 UTC m=+1.667223257,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.631653 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188cb45eb98dd831 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:52.582768689 +0000 UTC m=+2.083595768,LastTimestamp:2026-01-21 09:17:52.582768689 +0000 UTC m=+2.083595768,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.638339 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188cb45eba4c5207 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:52.595251719 +0000 UTC m=+2.096078768,LastTimestamp:2026-01-21 09:17:52.595251719 +0000 UTC m=+2.096078768,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.645142 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188cb45eba5ed777 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:52.596465527 +0000 UTC m=+2.097292576,LastTimestamp:2026-01-21 09:17:52.596465527 +0000 UTC m=+2.097292576,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.649081 5113 event.go:359] "Server 
rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188cb45ec7e7e60f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:52.823551503 +0000 UTC m=+2.324378562,LastTimestamp:2026-01-21 09:17:52.823551503 +0000 UTC m=+2.324378562,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.654050 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188cb45ec8c316e0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:52.837916384 +0000 UTC m=+2.338743483,LastTimestamp:2026-01-21 09:17:52.837916384 +0000 UTC 
m=+2.338743483,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.657702 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb45eca67a027 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:52.865476647 +0000 UTC m=+2.366303706,LastTimestamp:2026-01-21 09:17:52.865476647 +0000 UTC m=+2.366303706,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.661261 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb45eca8f27e4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:52.8680673 +0000 UTC m=+2.368894359,LastTimestamp:2026-01-21 09:17:52.8680673 +0000 UTC m=+2.368894359,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.665018 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188cb45ecaef6d86 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:52.874376582 +0000 UTC m=+2.375203641,LastTimestamp:2026-01-21 09:17:52.874376582 +0000 UTC m=+2.375203641,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.666527 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188cb45ecaff9b4c openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:52.875436876 +0000 UTC m=+2.376263965,LastTimestamp:2026-01-21 09:17:52.875436876 +0000 UTC m=+2.376263965,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.669479 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188cb45edb6cc2e7 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:53.151025895 +0000 UTC m=+2.651852974,LastTimestamp:2026-01-21 09:17:53.151025895 +0000 UTC m=+2.651852974,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: 
E0121 09:18:10.673417 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb45edb6fb88b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:53.151219851 +0000 UTC m=+2.652046910,LastTimestamp:2026-01-21 09:17:53.151219851 +0000 UTC m=+2.652046910,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.678129 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb45edb742956 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:53.15151087 +0000 UTC m=+2.652337939,LastTimestamp:2026-01-21 09:17:53.15151087 +0000 UTC m=+2.652337939,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 
09:18:10.682046 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188cb45edb763615 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:53.151645205 +0000 UTC m=+2.652472294,LastTimestamp:2026-01-21 09:17:53.151645205 +0000 UTC m=+2.652472294,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.685629 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188cb45edc4b86d2 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:53.165625042 +0000 UTC m=+2.666452121,LastTimestamp:2026-01-21 09:17:53.165625042 +0000 UTC m=+2.666452121,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.690241 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188cb45edc6561bf openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:53.167319487 +0000 UTC m=+2.668146576,LastTimestamp:2026-01-21 09:17:53.167319487 +0000 UTC m=+2.668146576,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.694064 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb45edea1a49e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container 
kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:53.204823198 +0000 UTC m=+2.705650287,LastTimestamp:2026-01-21 09:17:53.204823198 +0000 UTC m=+2.705650287,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.698037 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb45edeb83aae openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:53.206303406 +0000 UTC m=+2.707130465,LastTimestamp:2026-01-21 09:17:53.206303406 +0000 UTC m=+2.707130465,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.701698 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188cb45edee727a4 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:53.209378724 +0000 UTC m=+2.710205783,LastTimestamp:2026-01-21 09:17:53.209378724 +0000 UTC m=+2.710205783,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.705643 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb45edf1012aa openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:53.21206033 +0000 UTC m=+2.712887419,LastTimestamp:2026-01-21 09:17:53.21206033 +0000 UTC m=+2.712887419,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.709462 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188cb45eeaa1960b openshift-kube-scheduler 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:53.406146059 +0000 UTC m=+2.906973128,LastTimestamp:2026-01-21 09:17:53.406146059 +0000 UTC m=+2.906973128,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: I0121 09:18:10.711819 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 09:18:10 crc kubenswrapper[5113]: I0121 09:18:10.711968 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:18:10 crc kubenswrapper[5113]: I0121 09:18:10.713318 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:18:10 crc kubenswrapper[5113]: I0121 09:18:10.713371 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:18:10 crc kubenswrapper[5113]: I0121 09:18:10.713384 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.713445 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb45eeab5ee6a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:53.407479402 +0000 UTC m=+2.908306471,LastTimestamp:2026-01-21 09:17:53.407479402 +0000 UTC m=+2.908306471,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.713843 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:18:10 crc kubenswrapper[5113]: I0121 09:18:10.714314 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.717445 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188cb45eeb2ca635 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:53.415259701 +0000 UTC m=+2.916086770,LastTimestamp:2026-01-21 09:17:53.415259701 +0000 UTC m=+2.916086770,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.721199 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb45eeb385907 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:53.416026375 +0000 UTC m=+2.916853424,LastTimestamp:2026-01-21 09:17:53.416026375 +0000 UTC m=+2.916853424,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.725843 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188cb45eeb3abf4e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:53.41618363 +0000 UTC m=+2.917010699,LastTimestamp:2026-01-21 09:17:53.41618363 +0000 UTC m=+2.917010699,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.731876 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb45eeb48db92 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:53.41710837 +0000 UTC m=+2.917935439,LastTimestamp:2026-01-21 09:17:53.41710837 +0000 UTC m=+2.917935439,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.735664 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188cb45ef8b203b1 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:53.642103729 +0000 UTC m=+3.142930778,LastTimestamp:2026-01-21 09:17:53.642103729 +0000 UTC m=+3.142930778,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.740011 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188cb45ef95fe058 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:53.653497944 +0000 UTC m=+3.154324993,LastTimestamp:2026-01-21 09:17:53.653497944 +0000 UTC m=+3.154324993,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.744216 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb45ef9aedfce openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:53.65867515 +0000 UTC m=+3.159502199,LastTimestamp:2026-01-21 09:17:53.65867515 +0000 UTC m=+3.159502199,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.748228 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb45efa77f9ba openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:53.671854522 +0000 UTC m=+3.172681571,LastTimestamp:2026-01-21 09:17:53.671854522 +0000 UTC m=+3.172681571,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.752561 5113 event.go:359] "Server rejected 
event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb45efa883f0e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:53.672920846 +0000 UTC m=+3.173747895,LastTimestamp:2026-01-21 09:17:53.672920846 +0000 UTC m=+3.173747895,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.756070 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb45f0607b0f5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:53.865822453 +0000 UTC m=+3.366649502,LastTimestamp:2026-01-21 09:17:53.865822453 +0000 UTC 
m=+3.366649502,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.759941 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb45f0682dd7b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:53.873894779 +0000 UTC m=+3.374721828,LastTimestamp:2026-01-21 09:17:53.873894779 +0000 UTC m=+3.374721828,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: I0121 09:18:10.765037 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.765074 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb45f06932e33 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:53.874964019 +0000 UTC m=+3.375791068,LastTimestamp:2026-01-21 09:17:53.874964019 +0000 UTC m=+3.375791068,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.766070 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb45f07dc76ab openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:53.896543915 +0000 UTC m=+3.397370964,LastTimestamp:2026-01-21 09:17:53.896543915 +0000 UTC m=+3.397370964,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.769871 5113 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb45f12960a85 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:54.076478085 +0000 UTC m=+3.577305144,LastTimestamp:2026-01-21 09:17:54.076478085 +0000 UTC m=+3.577305144,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.774105 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb45f13148e1c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:54.084769308 +0000 UTC m=+3.585596377,LastTimestamp:2026-01-21 09:17:54.084769308 +0000 UTC m=+3.585596377,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.777588 5113 event.go:359] "Server 
rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb45f143d5f40 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:54.104221504 +0000 UTC m=+3.605048583,LastTimestamp:2026-01-21 09:17:54.104221504 +0000 UTC m=+3.605048583,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.781765 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb45f14dea8bb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:54.114791611 +0000 UTC m=+3.615618670,LastTimestamp:2026-01-21 09:17:54.114791611 +0000 UTC m=+3.615618670,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.786253 
5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb45f455a6bd4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:54.928208852 +0000 UTC m=+4.429035931,LastTimestamp:2026-01-21 09:17:54.928208852 +0000 UTC m=+4.429035931,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.790578 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb45f53c7a62c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:55.170248236 +0000 UTC m=+4.671075325,LastTimestamp:2026-01-21 09:17:55.170248236 +0000 UTC m=+4.671075325,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc 
kubenswrapper[5113]: E0121 09:18:10.793968 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb45f54ac7cdf openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:55.185245407 +0000 UTC m=+4.686072486,LastTimestamp:2026-01-21 09:17:55.185245407 +0000 UTC m=+4.686072486,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.797338 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb45f54c209dc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:55.186657756 +0000 UTC m=+4.687484845,LastTimestamp:2026-01-21 09:17:55.186657756 +0000 UTC m=+4.687484845,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.800552 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb45f63f50344 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:55.441656644 +0000 UTC m=+4.942483733,LastTimestamp:2026-01-21 09:17:55.441656644 +0000 UTC m=+4.942483733,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.804751 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb45f65641585 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:55.465713029 +0000 UTC m=+4.966540118,LastTimestamp:2026-01-21 09:17:55.465713029 +0000 UTC m=+4.966540118,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc 
kubenswrapper[5113]: E0121 09:18:10.809316 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb45f65795f48 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:55.467108168 +0000 UTC m=+4.967935257,LastTimestamp:2026-01-21 09:17:55.467108168 +0000 UTC m=+4.967935257,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.814009 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb45f74574acb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:55.716532939 +0000 UTC m=+5.217359998,LastTimestamp:2026-01-21 09:17:55.716532939 +0000 UTC m=+5.217359998,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.818987 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb45f755faa63 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:55.733858915 +0000 UTC m=+5.234686004,LastTimestamp:2026-01-21 09:17:55.733858915 +0000 UTC m=+5.234686004,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.822863 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb45f756f11b3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:55.734868403 +0000 UTC m=+5.235695482,LastTimestamp:2026-01-21 09:17:55.734868403 +0000 UTC 
m=+5.235695482,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.827066 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb45f830fd661 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:55.963508321 +0000 UTC m=+5.464335370,LastTimestamp:2026-01-21 09:17:55.963508321 +0000 UTC m=+5.464335370,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.834355 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb45f840d57f1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:55.980122097 +0000 UTC m=+5.480949136,LastTimestamp:2026-01-21 09:17:55.980122097 +0000 UTC m=+5.480949136,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.839520 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb45f841ef888 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:55.98127732 +0000 UTC m=+5.482104409,LastTimestamp:2026-01-21 09:17:55.98127732 +0000 UTC m=+5.482104409,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.845367 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb45f940d735c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:56.248564572 +0000 UTC m=+5.749391631,LastTimestamp:2026-01-21 09:17:56.248564572 +0000 UTC 
m=+5.749391631,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.851520 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188cb45f95185a51 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:56.266056273 +0000 UTC m=+5.766883332,LastTimestamp:2026-01-21 09:17:56.266056273 +0000 UTC m=+5.766883332,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.859817 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Jan 21 09:18:10 crc kubenswrapper[5113]: &Event{ObjectMeta:{kube-controller-manager-crc.188cb46150b0bace openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": context deadline exceeded Jan 21 09:18:10 crc kubenswrapper[5113]: body: Jan 
21 09:18:10 crc kubenswrapper[5113]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:18:03.708349134 +0000 UTC m=+13.209176183,LastTimestamp:2026-01-21 09:18:03.708349134 +0000 UTC m=+13.209176183,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 21 09:18:10 crc kubenswrapper[5113]: > Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.866933 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188cb46150b233b1 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:18:03.708445617 +0000 UTC m=+13.209272676,LastTimestamp:2026-01-21 09:18:03.708445617 +0000 UTC m=+13.209272676,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.871785 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 21 09:18:10 crc kubenswrapper[5113]: &Event{ObjectMeta:{kube-apiserver-crc.188cb461b24c7466 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Jan 21 09:18:10 crc kubenswrapper[5113]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 21 09:18:10 crc kubenswrapper[5113]: Jan 21 09:18:10 crc kubenswrapper[5113]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:18:05.345944678 +0000 UTC m=+14.846771777,LastTimestamp:2026-01-21 09:18:05.345944678 +0000 UTC m=+14.846771777,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 21 09:18:10 crc kubenswrapper[5113]: > Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.873959 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb461b24dd074 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:18:05.34603378 +0000 UTC m=+14.846860869,LastTimestamp:2026-01-21 09:18:05.34603378 +0000 UTC m=+14.846860869,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.878489 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cb461b24c7466\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Jan 21 09:18:10 crc kubenswrapper[5113]: &Event{ObjectMeta:{kube-apiserver-crc.188cb461b24c7466 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403
Jan 21 09:18:10 crc kubenswrapper[5113]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Jan 21 09:18:10 crc kubenswrapper[5113]: 
Jan 21 09:18:10 crc kubenswrapper[5113]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:18:05.345944678 +0000 UTC m=+14.846771777,LastTimestamp:2026-01-21 09:18:05.358554305 +0000 UTC m=+14.859381354,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Jan 21 09:18:10 crc kubenswrapper[5113]: >
Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.879639 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cb461b24dd074\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb461b24dd074 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:18:05.34603378 +0000 UTC m=+14.846860869,LastTimestamp:2026-01-21 09:18:05.358595776 +0000 UTC m=+14.859422825,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.887873 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Jan 21 09:18:10 crc kubenswrapper[5113]: &Event{ObjectMeta:{kube-apiserver-crc.188cb462df0ede37 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:50902->192.168.126.11:17697: read: connection reset by peer
Jan 21 09:18:10 crc kubenswrapper[5113]: body: 
Jan 21 09:18:10 crc kubenswrapper[5113]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:18:10.391850551 +0000 UTC m=+19.892677610,LastTimestamp:2026-01-21 09:18:10.391850551 +0000 UTC m=+19.892677610,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Jan 21 09:18:10 crc kubenswrapper[5113]: >
Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.894243 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb462df0fbe7a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:50902->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:18:10.391907962 +0000 UTC m=+19.892735031,LastTimestamp:2026-01-21 09:18:10.391907962 +0000 UTC m=+19.892735031,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.899805 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Jan 21 09:18:10 crc kubenswrapper[5113]: &Event{ObjectMeta:{kube-apiserver-crc.188cb462df10b4d0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:35636->192.168.126.11:17697: read: connection reset by peer
Jan 21 09:18:10 crc kubenswrapper[5113]: body: 
Jan 21 09:18:10 crc kubenswrapper[5113]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:18:10.391971024 +0000 UTC m=+19.892798083,LastTimestamp:2026-01-21 09:18:10.391971024 +0000 UTC m=+19.892798083,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Jan 21 09:18:10 crc kubenswrapper[5113]: >
Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.909090 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb462df120bca openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35636->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:18:10.392058826 +0000 UTC m=+19.892885885,LastTimestamp:2026-01-21 09:18:10.392058826 +0000 UTC m=+19.892885885,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.914644 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Jan 21 09:18:10 crc kubenswrapper[5113]: &Event{ObjectMeta:{kube-apiserver-crc.188cb462df1ad49a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused
Jan 21 09:18:10 crc kubenswrapper[5113]: body: 
Jan 21 09:18:10 crc kubenswrapper[5113]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:18:10.392634522 +0000 UTC m=+19.893461581,LastTimestamp:2026-01-21 09:18:10.392634522 +0000 UTC m=+19.893461581,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Jan 21 09:18:10 crc kubenswrapper[5113]: >
Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.918891 5113 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.921575 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb462df1b41dd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:18:10.392662493 +0000 UTC m=+19.893489552,LastTimestamp:2026-01-21 09:18:10.392662493 +0000 UTC m=+19.893489552,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 09:18:10 crc kubenswrapper[5113]: I0121 09:18:10.975455 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Jan 21 09:18:10 crc kubenswrapper[5113]: I0121 09:18:10.977580 5113 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="e911156f4c12519d3df759a20d82c8a6464035eb8117c7edd3d085b5f83fe37a" exitCode=255
Jan 21 09:18:10 crc kubenswrapper[5113]: I0121 09:18:10.977713 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"e911156f4c12519d3df759a20d82c8a6464035eb8117c7edd3d085b5f83fe37a"}
Jan 21 09:18:10 crc kubenswrapper[5113]: I0121 09:18:10.977882 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:18:10 crc kubenswrapper[5113]: I0121 09:18:10.978312 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:18:10 crc kubenswrapper[5113]: I0121 09:18:10.978421 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:18:10 crc kubenswrapper[5113]: I0121 09:18:10.978469 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:18:10 crc kubenswrapper[5113]: I0121 09:18:10.978490 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.979101 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:18:10 crc kubenswrapper[5113]: I0121 09:18:10.979580 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:18:10 crc kubenswrapper[5113]: I0121 09:18:10.979612 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:18:10 crc kubenswrapper[5113]: I0121 09:18:10.979624 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.980006 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:18:10 crc kubenswrapper[5113]: I0121 09:18:10.980261 5113 scope.go:117] "RemoveContainer" containerID="e911156f4c12519d3df759a20d82c8a6464035eb8117c7edd3d085b5f83fe37a"
Jan 21 09:18:10 crc kubenswrapper[5113]: I0121 09:18:10.986385 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 09:18:10 crc kubenswrapper[5113]: E0121 09:18:10.989285 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cb45f06932e33\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb45f06932e33 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:53.874964019 +0000 UTC m=+3.375791068,LastTimestamp:2026-01-21 09:18:10.981808577 +0000 UTC m=+20.482635646,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 09:18:11 crc kubenswrapper[5113]: E0121 09:18:11.209971 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cb45f12960a85\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb45f12960a85 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:54.076478085 +0000 UTC m=+3.577305144,LastTimestamp:2026-01-21 09:18:11.202805604 +0000 UTC m=+20.703632683,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 09:18:11 crc kubenswrapper[5113]: E0121 09:18:11.229905 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cb45f143d5f40\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb45f143d5f40 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:54.104221504 +0000 UTC m=+3.605048583,LastTimestamp:2026-01-21 09:18:11.219896341 +0000 UTC m=+20.720723430,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 09:18:11 crc kubenswrapper[5113]: I0121 09:18:11.768394 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 09:18:11 crc kubenswrapper[5113]: I0121 09:18:11.981766 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Jan 21 09:18:11 crc kubenswrapper[5113]: I0121 09:18:11.983352 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"02d2ddac293edc8526615156e4cee514225528335b3eb8f6ec743567b824dbfc"}
Jan 21 09:18:11 crc kubenswrapper[5113]: I0121 09:18:11.983424 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:18:11 crc kubenswrapper[5113]: I0121 09:18:11.983584 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:18:11 crc kubenswrapper[5113]: I0121 09:18:11.984100 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:18:11 crc kubenswrapper[5113]: I0121 09:18:11.984133 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:18:11 crc kubenswrapper[5113]: I0121 09:18:11.984134 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:18:11 crc kubenswrapper[5113]: I0121 09:18:11.984145 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:18:11 crc kubenswrapper[5113]: I0121 09:18:11.984164 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:18:11 crc kubenswrapper[5113]: I0121 09:18:11.984235 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:18:11 crc kubenswrapper[5113]: E0121 09:18:11.984525 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:18:11 crc kubenswrapper[5113]: E0121 09:18:11.984729 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:18:12 crc kubenswrapper[5113]: I0121 09:18:12.766629 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 09:18:12 crc kubenswrapper[5113]: I0121 09:18:12.987114 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Jan 21 09:18:12 crc kubenswrapper[5113]: I0121 09:18:12.987713 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Jan 21 09:18:12 crc kubenswrapper[5113]: I0121 09:18:12.989429 5113 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="02d2ddac293edc8526615156e4cee514225528335b3eb8f6ec743567b824dbfc" exitCode=255
Jan 21 09:18:12 crc kubenswrapper[5113]: I0121 09:18:12.989520 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"02d2ddac293edc8526615156e4cee514225528335b3eb8f6ec743567b824dbfc"}
Jan 21 09:18:12 crc kubenswrapper[5113]: I0121 09:18:12.989564 5113 scope.go:117] "RemoveContainer" containerID="e911156f4c12519d3df759a20d82c8a6464035eb8117c7edd3d085b5f83fe37a"
Jan 21 09:18:12 crc kubenswrapper[5113]: I0121 09:18:12.991167 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:18:12 crc kubenswrapper[5113]: I0121 09:18:12.994985 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:18:12 crc kubenswrapper[5113]: I0121 09:18:12.995032 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:18:12 crc kubenswrapper[5113]: I0121 09:18:12.995046 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:18:12 crc kubenswrapper[5113]: E0121 09:18:12.995491 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:18:12 crc kubenswrapper[5113]: I0121 09:18:12.995781 5113 scope.go:117] "RemoveContainer" containerID="02d2ddac293edc8526615156e4cee514225528335b3eb8f6ec743567b824dbfc"
Jan 21 09:18:12 crc kubenswrapper[5113]: E0121 09:18:12.996002 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 21 09:18:13 crc kubenswrapper[5113]: E0121 09:18:13.002100 5113 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb4637a467964 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:18:12.995963236 +0000 UTC m=+22.496790285,LastTimestamp:2026-01-21 09:18:12.995963236 +0000 UTC m=+22.496790285,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 09:18:13 crc kubenswrapper[5113]: E0121 09:18:13.393657 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 21 09:18:13 crc kubenswrapper[5113]: I0121 09:18:13.644955 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:18:13 crc kubenswrapper[5113]: I0121 09:18:13.646217 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:18:13 crc kubenswrapper[5113]: I0121 09:18:13.646282 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:18:13 crc kubenswrapper[5113]: I0121 09:18:13.646308 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:18:13 crc kubenswrapper[5113]: I0121 09:18:13.646351 5113 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 21 09:18:13 crc kubenswrapper[5113]: E0121 09:18:13.663559 5113 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 21 09:18:13 crc kubenswrapper[5113]: I0121 09:18:13.768364 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 09:18:13 crc kubenswrapper[5113]: I0121 09:18:13.993876 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Jan 21 09:18:14 crc kubenswrapper[5113]: I0121 09:18:14.017478 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Jan 21 09:18:14 crc kubenswrapper[5113]: I0121 09:18:14.017827 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:18:14 crc kubenswrapper[5113]: I0121 09:18:14.018826 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:18:14 crc kubenswrapper[5113]: I0121 09:18:14.018947 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:18:14 crc kubenswrapper[5113]: I0121 09:18:14.018973 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:18:14 crc kubenswrapper[5113]: E0121 09:18:14.019620 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:18:14 crc kubenswrapper[5113]: I0121 09:18:14.039258 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Jan 21 09:18:14 crc kubenswrapper[5113]: I0121 09:18:14.769486 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 09:18:14 crc kubenswrapper[5113]: I0121 09:18:14.999637 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:18:15 crc kubenswrapper[5113]: I0121 09:18:15.000512 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:18:15 crc kubenswrapper[5113]: I0121 09:18:15.000591 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:18:15 crc kubenswrapper[5113]: I0121 09:18:15.000619 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:18:15 crc kubenswrapper[5113]: E0121 09:18:15.001595 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:18:15 crc kubenswrapper[5113]: I0121 09:18:15.669507 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 09:18:15 crc kubenswrapper[5113]: I0121 09:18:15.670221 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:18:15 crc kubenswrapper[5113]: I0121 09:18:15.671611 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:18:15 crc kubenswrapper[5113]: I0121 09:18:15.671672 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:18:15 crc kubenswrapper[5113]: I0121 09:18:15.671691 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:18:15 crc kubenswrapper[5113]: E0121 09:18:15.672366 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:18:15 crc kubenswrapper[5113]: I0121 09:18:15.672916 5113 scope.go:117] "RemoveContainer" containerID="02d2ddac293edc8526615156e4cee514225528335b3eb8f6ec743567b824dbfc"
Jan 21 09:18:15 crc kubenswrapper[5113]: E0121 09:18:15.673300 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 21 09:18:15 crc kubenswrapper[5113]: E0121 09:18:15.681308 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cb4637a467964\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb4637a467964 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:18:12.995963236 +0000 UTC m=+22.496790285,LastTimestamp:2026-01-21 09:18:15.67324287 +0000 UTC m=+25.174069959,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 09:18:15 crc kubenswrapper[5113]: I0121 09:18:15.767122 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 09:18:16 crc kubenswrapper[5113]: E0121 09:18:16.173481 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 21 09:18:16 crc kubenswrapper[5113]: I0121 09:18:16.770137 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 09:18:17 crc kubenswrapper[5113]: E0121 09:18:17.556594 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 21 09:18:17 crc kubenswrapper[5113]: I0121 09:18:17.770012 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 09:18:18 crc kubenswrapper[5113]: I0121 09:18:18.769714 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 09:18:19 crc kubenswrapper[5113]: E0121 09:18:19.132659 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 21 09:18:19 crc kubenswrapper[5113]: I0121 09:18:19.770148 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 09:18:20 crc kubenswrapper[5113]: E0121 09:18:20.402240 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 21 09:18:20 crc kubenswrapper[5113]: I0121 09:18:20.664429 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:18:20 crc kubenswrapper[5113]: I0121 09:18:20.665625 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:18:20 crc kubenswrapper[5113]: I0121 09:18:20.665697 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:18:20 crc kubenswrapper[5113]: I0121 09:18:20.665716 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:18:20 crc kubenswrapper[5113]: I0121 09:18:20.665790 5113 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Jan 21 09:18:20 crc kubenswrapper[5113]: E0121 09:18:20.681069 5113 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Jan 21 09:18:20 crc kubenswrapper[5113]: I0121 09:18:20.767140 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 09:18:20 crc kubenswrapper[5113]: E0121 09:18:20.919130 5113 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 21 09:18:21 crc kubenswrapper[5113]: I0121 09:18:21.769599 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 09:18:21 crc kubenswrapper[5113]: E0121 09:18:21.983180 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 21 09:18:21 crc kubenswrapper[5113]: I0121 09:18:21.984432 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 09:18:21 crc kubenswrapper[5113]: I0121 09:18:21.984771 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:18:21 crc kubenswrapper[5113]: I0121 09:18:21.985780 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:18:21 crc kubenswrapper[5113]: I0121 09:18:21.985841 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:18:21 crc kubenswrapper[5113]: I0121 09:18:21.985863 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:18:21 crc kubenswrapper[5113]: E0121 09:18:21.986453 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Jan 21 09:18:21 crc kubenswrapper[5113]: I0121 09:18:21.986946 5113 scope.go:117] "RemoveContainer" containerID="02d2ddac293edc8526615156e4cee514225528335b3eb8f6ec743567b824dbfc"
Jan 21 09:18:21 crc kubenswrapper[5113]: E0121 09:18:21.987281 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Jan 21 09:18:21 crc kubenswrapper[5113]: E0121 09:18:21.995858 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cb4637a467964\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb4637a467964 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:18:12.995963236 +0000 UTC m=+22.496790285,LastTimestamp:2026-01-21 09:18:21.987227548 +0000 UTC m=+31.488054627,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 09:18:22 crc kubenswrapper[5113]: I0121 09:18:22.769165 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 09:18:23 crc kubenswrapper[5113]: I0121 09:18:23.767561 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 09:18:24 crc kubenswrapper[5113]: I0121 09:18:24.769492 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 09:18:25 crc kubenswrapper[5113]: I0121 09:18:25.769706 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 09:18:26 crc kubenswrapper[5113]: I0121 09:18:26.768280 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jan 21 09:18:27 crc kubenswrapper[5113]: E0121 09:18:27.408467 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Jan 21 09:18:27 crc kubenswrapper[5113]: I0121 09:18:27.681407 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Jan 21 09:18:27 crc kubenswrapper[5113]: I0121 09:18:27.682647 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:18:27 crc kubenswrapper[5113]: I0121 09:18:27.682717 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:18:27 crc kubenswrapper[5113]: I0121 09:18:27.682774 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:18:27 crc kubenswrapper[5113]: I0121 09:18:27.682815 5113 kubelet_node_status.go:78]
"Attempting to register node" node="crc" Jan 21 09:18:27 crc kubenswrapper[5113]: E0121 09:18:27.695794 5113 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 21 09:18:27 crc kubenswrapper[5113]: I0121 09:18:27.769835 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:18:28 crc kubenswrapper[5113]: I0121 09:18:28.769708 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:18:29 crc kubenswrapper[5113]: I0121 09:18:29.770722 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:18:30 crc kubenswrapper[5113]: I0121 09:18:30.770498 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:18:30 crc kubenswrapper[5113]: E0121 09:18:30.919355 5113 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 09:18:31 crc kubenswrapper[5113]: I0121 09:18:31.769036 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource 
"csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:18:32 crc kubenswrapper[5113]: E0121 09:18:32.258114 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 21 09:18:32 crc kubenswrapper[5113]: I0121 09:18:32.769436 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:18:32 crc kubenswrapper[5113]: E0121 09:18:32.769804 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 21 09:18:33 crc kubenswrapper[5113]: I0121 09:18:33.769234 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:18:33 crc kubenswrapper[5113]: I0121 09:18:33.843645 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:18:33 crc kubenswrapper[5113]: I0121 09:18:33.844848 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:18:33 crc kubenswrapper[5113]: I0121 09:18:33.844901 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:18:33 crc 
kubenswrapper[5113]: I0121 09:18:33.844915 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:18:33 crc kubenswrapper[5113]: E0121 09:18:33.845299 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:18:33 crc kubenswrapper[5113]: I0121 09:18:33.845600 5113 scope.go:117] "RemoveContainer" containerID="02d2ddac293edc8526615156e4cee514225528335b3eb8f6ec743567b824dbfc" Jan 21 09:18:33 crc kubenswrapper[5113]: E0121 09:18:33.853141 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cb45f06932e33\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb45f06932e33 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:53.874964019 +0000 UTC m=+3.375791068,LastTimestamp:2026-01-21 09:18:33.846844458 +0000 UTC m=+43.347671507,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:34 crc kubenswrapper[5113]: E0121 09:18:34.022346 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cb45f12960a85\" is forbidden: User \"system:anonymous\" cannot patch resource 
\"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb45f12960a85 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:54.076478085 +0000 UTC m=+3.577305144,LastTimestamp:2026-01-21 09:18:34.016197803 +0000 UTC m=+43.517024852,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:34 crc kubenswrapper[5113]: E0121 09:18:34.037441 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cb45f143d5f40\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb45f143d5f40 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:17:54.104221504 +0000 UTC m=+3.605048583,LastTimestamp:2026-01-21 09:18:34.031205985 +0000 UTC m=+43.532033034,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:34 crc kubenswrapper[5113]: I0121 09:18:34.322118 
5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 21 09:18:34 crc kubenswrapper[5113]: I0121 09:18:34.324379 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"8c298b5e66a9a3b54435548213ff06a2696202b276c5da244dccfccd06da8e0a"} Jan 21 09:18:34 crc kubenswrapper[5113]: I0121 09:18:34.324843 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:18:34 crc kubenswrapper[5113]: I0121 09:18:34.325621 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:18:34 crc kubenswrapper[5113]: I0121 09:18:34.325654 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:18:34 crc kubenswrapper[5113]: I0121 09:18:34.325664 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:18:34 crc kubenswrapper[5113]: E0121 09:18:34.325960 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:18:34 crc kubenswrapper[5113]: E0121 09:18:34.417398 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 21 09:18:34 crc kubenswrapper[5113]: I0121 09:18:34.696780 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:18:34 crc kubenswrapper[5113]: I0121 09:18:34.698303 5113 kubelet_node_status.go:736] "Recording event message 
for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:18:34 crc kubenswrapper[5113]: I0121 09:18:34.698373 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:18:34 crc kubenswrapper[5113]: I0121 09:18:34.698444 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:18:34 crc kubenswrapper[5113]: I0121 09:18:34.698551 5113 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 21 09:18:34 crc kubenswrapper[5113]: E0121 09:18:34.713822 5113 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 21 09:18:34 crc kubenswrapper[5113]: I0121 09:18:34.771168 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:18:35 crc kubenswrapper[5113]: I0121 09:18:35.330182 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 21 09:18:35 crc kubenswrapper[5113]: I0121 09:18:35.331394 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 21 09:18:35 crc kubenswrapper[5113]: I0121 09:18:35.334323 5113 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="8c298b5e66a9a3b54435548213ff06a2696202b276c5da244dccfccd06da8e0a" exitCode=255 Jan 21 09:18:35 crc kubenswrapper[5113]: I0121 09:18:35.334419 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"8c298b5e66a9a3b54435548213ff06a2696202b276c5da244dccfccd06da8e0a"} Jan 21 09:18:35 crc kubenswrapper[5113]: I0121 09:18:35.334510 5113 scope.go:117] "RemoveContainer" containerID="02d2ddac293edc8526615156e4cee514225528335b3eb8f6ec743567b824dbfc" Jan 21 09:18:35 crc kubenswrapper[5113]: I0121 09:18:35.334924 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:18:35 crc kubenswrapper[5113]: I0121 09:18:35.335824 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:18:35 crc kubenswrapper[5113]: I0121 09:18:35.335880 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:18:35 crc kubenswrapper[5113]: I0121 09:18:35.335899 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:18:35 crc kubenswrapper[5113]: E0121 09:18:35.336388 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:18:35 crc kubenswrapper[5113]: I0121 09:18:35.336852 5113 scope.go:117] "RemoveContainer" containerID="8c298b5e66a9a3b54435548213ff06a2696202b276c5da244dccfccd06da8e0a" Jan 21 09:18:35 crc kubenswrapper[5113]: E0121 09:18:35.337182 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 21 09:18:35 crc kubenswrapper[5113]: E0121 09:18:35.343967 5113 event.go:359] "Server 
rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cb4637a467964\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb4637a467964 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:18:12.995963236 +0000 UTC m=+22.496790285,LastTimestamp:2026-01-21 09:18:35.337132401 +0000 UTC m=+44.837959490,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:35 crc kubenswrapper[5113]: I0121 09:18:35.669520 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:18:35 crc kubenswrapper[5113]: I0121 09:18:35.769393 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:18:36 crc kubenswrapper[5113]: I0121 09:18:36.339998 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 21 09:18:36 crc kubenswrapper[5113]: I0121 09:18:36.344022 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller 
attach/detach" Jan 21 09:18:36 crc kubenswrapper[5113]: I0121 09:18:36.345174 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:18:36 crc kubenswrapper[5113]: I0121 09:18:36.345393 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:18:36 crc kubenswrapper[5113]: I0121 09:18:36.345546 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:18:36 crc kubenswrapper[5113]: E0121 09:18:36.346573 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:18:36 crc kubenswrapper[5113]: I0121 09:18:36.347797 5113 scope.go:117] "RemoveContainer" containerID="8c298b5e66a9a3b54435548213ff06a2696202b276c5da244dccfccd06da8e0a" Jan 21 09:18:36 crc kubenswrapper[5113]: E0121 09:18:36.348618 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 21 09:18:36 crc kubenswrapper[5113]: E0121 09:18:36.356610 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cb4637a467964\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb4637a467964 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:18:12.995963236 +0000 UTC m=+22.496790285,LastTimestamp:2026-01-21 09:18:36.348526181 +0000 UTC m=+45.849353270,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:36 crc kubenswrapper[5113]: I0121 09:18:36.769448 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:18:37 crc kubenswrapper[5113]: I0121 09:18:37.771909 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:18:38 crc kubenswrapper[5113]: I0121 09:18:38.768127 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:18:39 crc kubenswrapper[5113]: I0121 09:18:39.770495 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:18:39 crc kubenswrapper[5113]: E0121 
09:18:39.840221 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 21 09:18:40 crc kubenswrapper[5113]: E0121 09:18:40.370046 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 21 09:18:40 crc kubenswrapper[5113]: I0121 09:18:40.768904 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:18:40 crc kubenswrapper[5113]: E0121 09:18:40.919641 5113 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 09:18:41 crc kubenswrapper[5113]: E0121 09:18:41.423916 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 21 09:18:41 crc kubenswrapper[5113]: I0121 09:18:41.714552 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:18:41 crc kubenswrapper[5113]: I0121 09:18:41.715970 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:18:41 crc kubenswrapper[5113]: I0121 09:18:41.716015 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 09:18:41 crc kubenswrapper[5113]: I0121 09:18:41.716033 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:18:41 crc kubenswrapper[5113]: I0121 09:18:41.716088 5113 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 21 09:18:41 crc kubenswrapper[5113]: E0121 09:18:41.732840 5113 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 21 09:18:41 crc kubenswrapper[5113]: I0121 09:18:41.769157 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:18:42 crc kubenswrapper[5113]: I0121 09:18:42.767910 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:18:43 crc kubenswrapper[5113]: I0121 09:18:43.769272 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:18:44 crc kubenswrapper[5113]: I0121 09:18:44.325807 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:18:44 crc kubenswrapper[5113]: I0121 09:18:44.326151 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:18:44 crc kubenswrapper[5113]: I0121 09:18:44.327239 5113 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:18:44 crc kubenswrapper[5113]: I0121 09:18:44.327313 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:18:44 crc kubenswrapper[5113]: I0121 09:18:44.327333 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:18:44 crc kubenswrapper[5113]: E0121 09:18:44.328000 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:18:44 crc kubenswrapper[5113]: I0121 09:18:44.328432 5113 scope.go:117] "RemoveContainer" containerID="8c298b5e66a9a3b54435548213ff06a2696202b276c5da244dccfccd06da8e0a" Jan 21 09:18:44 crc kubenswrapper[5113]: E0121 09:18:44.328809 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 21 09:18:44 crc kubenswrapper[5113]: E0121 09:18:44.336662 5113 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188cb4637a467964\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188cb4637a467964 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:18:12.995963236 +0000 UTC m=+22.496790285,LastTimestamp:2026-01-21 09:18:44.3287238 +0000 UTC m=+53.829550879,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:18:44 crc kubenswrapper[5113]: I0121 09:18:44.770250 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:18:45 crc kubenswrapper[5113]: I0121 09:18:45.769585 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:18:45 crc kubenswrapper[5113]: I0121 09:18:45.934169 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 09:18:45 crc kubenswrapper[5113]: I0121 09:18:45.934615 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:18:45 crc kubenswrapper[5113]: I0121 09:18:45.936006 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:18:45 crc kubenswrapper[5113]: I0121 09:18:45.936067 5113 kubelet_node_status.go:736] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:18:45 crc kubenswrapper[5113]: I0121 09:18:45.936088 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:18:45 crc kubenswrapper[5113]: E0121 09:18:45.936723 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:18:46 crc kubenswrapper[5113]: I0121 09:18:46.769815 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:18:47 crc kubenswrapper[5113]: I0121 09:18:47.770322 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:18:48 crc kubenswrapper[5113]: E0121 09:18:48.432971 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 21 09:18:48 crc kubenswrapper[5113]: I0121 09:18:48.733594 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:18:48 crc kubenswrapper[5113]: I0121 09:18:48.734914 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:18:48 crc kubenswrapper[5113]: I0121 09:18:48.734985 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:18:48 crc kubenswrapper[5113]: I0121 09:18:48.735004 5113 kubelet_node_status.go:736] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:18:48 crc kubenswrapper[5113]: I0121 09:18:48.735033 5113 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 21 09:18:48 crc kubenswrapper[5113]: E0121 09:18:48.750446 5113 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 21 09:18:48 crc kubenswrapper[5113]: I0121 09:18:48.769343 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:18:49 crc kubenswrapper[5113]: I0121 09:18:49.769206 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:18:50 crc kubenswrapper[5113]: I0121 09:18:50.772132 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:18:50 crc kubenswrapper[5113]: E0121 09:18:50.921149 5113 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 09:18:51 crc kubenswrapper[5113]: I0121 09:18:51.767507 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:18:52 crc kubenswrapper[5113]: I0121 09:18:52.769609 5113 csi_plugin.go:988] 
Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:18:53 crc kubenswrapper[5113]: I0121 09:18:53.768656 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:18:54 crc kubenswrapper[5113]: I0121 09:18:54.767507 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:18:55 crc kubenswrapper[5113]: E0121 09:18:55.441627 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 21 09:18:55 crc kubenswrapper[5113]: I0121 09:18:55.751634 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:18:55 crc kubenswrapper[5113]: I0121 09:18:55.752760 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:18:55 crc kubenswrapper[5113]: I0121 09:18:55.752804 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:18:55 crc kubenswrapper[5113]: I0121 09:18:55.752815 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:18:55 crc kubenswrapper[5113]: I0121 09:18:55.752839 5113 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 21 09:18:55 crc 
kubenswrapper[5113]: I0121 09:18:55.767267 5113 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 21 09:18:55 crc kubenswrapper[5113]: E0121 09:18:55.767386 5113 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 21 09:18:55 crc kubenswrapper[5113]: I0121 09:18:55.922819 5113 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-s8srs" Jan 21 09:18:55 crc kubenswrapper[5113]: I0121 09:18:55.928970 5113 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-s8srs" Jan 21 09:18:55 crc kubenswrapper[5113]: I0121 09:18:55.935462 5113 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 21 09:18:56 crc kubenswrapper[5113]: I0121 09:18:56.661319 5113 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 21 09:18:56 crc kubenswrapper[5113]: I0121 09:18:56.937002 5113 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-02-20 09:13:55 +0000 UTC" deadline="2026-02-13 00:41:19.498401423 +0000 UTC" Jan 21 09:18:56 crc kubenswrapper[5113]: I0121 09:18:56.937145 5113 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="543h22m22.561263622s" Jan 21 09:18:57 crc kubenswrapper[5113]: I0121 09:18:57.842601 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:18:57 crc 
kubenswrapper[5113]: I0121 09:18:57.844096 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:18:57 crc kubenswrapper[5113]: I0121 09:18:57.844146 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:18:57 crc kubenswrapper[5113]: I0121 09:18:57.844171 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:18:57 crc kubenswrapper[5113]: E0121 09:18:57.844857 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:18:57 crc kubenswrapper[5113]: I0121 09:18:57.845225 5113 scope.go:117] "RemoveContainer" containerID="8c298b5e66a9a3b54435548213ff06a2696202b276c5da244dccfccd06da8e0a" Jan 21 09:18:58 crc kubenswrapper[5113]: I0121 09:18:58.405453 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 21 09:18:58 crc kubenswrapper[5113]: I0121 09:18:58.407397 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"383eb31f942f4a72a515ee030cd46d5e1130d7d74a8927d5daa09c8d744a67f6"} Jan 21 09:18:58 crc kubenswrapper[5113]: I0121 09:18:58.407707 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:18:58 crc kubenswrapper[5113]: I0121 09:18:58.408620 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:18:58 crc kubenswrapper[5113]: I0121 09:18:58.408661 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:18:58 crc kubenswrapper[5113]: I0121 
09:18:58.408678 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:18:58 crc kubenswrapper[5113]: E0121 09:18:58.409251 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:19:00 crc kubenswrapper[5113]: I0121 09:19:00.412363 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 21 09:19:00 crc kubenswrapper[5113]: I0121 09:19:00.413338 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 21 09:19:00 crc kubenswrapper[5113]: I0121 09:19:00.415012 5113 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="383eb31f942f4a72a515ee030cd46d5e1130d7d74a8927d5daa09c8d744a67f6" exitCode=255 Jan 21 09:19:00 crc kubenswrapper[5113]: I0121 09:19:00.415070 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"383eb31f942f4a72a515ee030cd46d5e1130d7d74a8927d5daa09c8d744a67f6"} Jan 21 09:19:00 crc kubenswrapper[5113]: I0121 09:19:00.415130 5113 scope.go:117] "RemoveContainer" containerID="8c298b5e66a9a3b54435548213ff06a2696202b276c5da244dccfccd06da8e0a" Jan 21 09:19:00 crc kubenswrapper[5113]: I0121 09:19:00.415324 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:19:00 crc kubenswrapper[5113]: I0121 09:19:00.416025 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:00 crc kubenswrapper[5113]: I0121 09:19:00.416054 5113 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:00 crc kubenswrapper[5113]: I0121 09:19:00.416065 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:00 crc kubenswrapper[5113]: E0121 09:19:00.416430 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:19:00 crc kubenswrapper[5113]: I0121 09:19:00.416647 5113 scope.go:117] "RemoveContainer" containerID="383eb31f942f4a72a515ee030cd46d5e1130d7d74a8927d5daa09c8d744a67f6" Jan 21 09:19:00 crc kubenswrapper[5113]: E0121 09:19:00.416831 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 21 09:19:00 crc kubenswrapper[5113]: E0121 09:19:00.922033 5113 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 09:19:01 crc kubenswrapper[5113]: I0121 09:19:01.418926 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 21 09:19:02 crc kubenswrapper[5113]: I0121 09:19:02.768176 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:19:02 crc kubenswrapper[5113]: I0121 09:19:02.769242 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:02 crc kubenswrapper[5113]: I0121 09:19:02.769281 5113 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:02 crc kubenswrapper[5113]: I0121 09:19:02.769294 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:02 crc kubenswrapper[5113]: I0121 09:19:02.769395 5113 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 21 09:19:02 crc kubenswrapper[5113]: I0121 09:19:02.777170 5113 kubelet_node_status.go:127] "Node was previously registered" node="crc" Jan 21 09:19:02 crc kubenswrapper[5113]: I0121 09:19:02.777389 5113 kubelet_node_status.go:81] "Successfully registered node" node="crc" Jan 21 09:19:02 crc kubenswrapper[5113]: E0121 09:19:02.777405 5113 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Jan 21 09:19:02 crc kubenswrapper[5113]: I0121 09:19:02.780041 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:02 crc kubenswrapper[5113]: I0121 09:19:02.780070 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:02 crc kubenswrapper[5113]: I0121 09:19:02.780080 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:02 crc kubenswrapper[5113]: I0121 09:19:02.780095 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:02 crc kubenswrapper[5113]: I0121 09:19:02.780104 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:02Z","lastTransitionTime":"2026-01-21T09:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:02 crc kubenswrapper[5113]: E0121 09:19:02.792457 5113 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400448Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861248Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"814c5727-ea8c-4a4c-99fd-0eb8e7b766cd\\\",\\\"systemUUID\\\":\\\"a84b16b3-46f1-4672-86d9-42da1a9b9cd6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:02 crc kubenswrapper[5113]: I0121 09:19:02.800385 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:02 crc kubenswrapper[5113]: I0121 09:19:02.800432 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:02 crc kubenswrapper[5113]: I0121 09:19:02.800448 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:02 crc kubenswrapper[5113]: I0121 09:19:02.800467 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:02 crc kubenswrapper[5113]: I0121 09:19:02.800482 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:02Z","lastTransitionTime":"2026-01-21T09:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:02 crc kubenswrapper[5113]: E0121 09:19:02.810820 5113 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400448Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861248Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"814c5727-ea8c-4a4c-99fd-0eb8e7b766cd\\\",\\\"systemUUID\\\":\\\"a84b16b3-46f1-4672-86d9-42da1a9b9cd6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:02 crc kubenswrapper[5113]: I0121 09:19:02.818161 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:02 crc kubenswrapper[5113]: I0121 09:19:02.818205 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:02 crc kubenswrapper[5113]: I0121 09:19:02.818215 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:02 crc kubenswrapper[5113]: I0121 09:19:02.818240 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:02 crc kubenswrapper[5113]: I0121 09:19:02.818251 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:02Z","lastTransitionTime":"2026-01-21T09:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:02 crc kubenswrapper[5113]: E0121 09:19:02.829291 5113 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400448Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861248Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"814c5727-ea8c-4a4c-99fd-0eb8e7b766cd\\\",\\\"systemUUID\\\":\\\"a84b16b3-46f1-4672-86d9-42da1a9b9cd6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:02 crc kubenswrapper[5113]: I0121 09:19:02.838668 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:02 crc kubenswrapper[5113]: I0121 09:19:02.838703 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:02 crc kubenswrapper[5113]: I0121 09:19:02.838712 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:02 crc kubenswrapper[5113]: I0121 09:19:02.838728 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:02 crc kubenswrapper[5113]: I0121 09:19:02.838757 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:02Z","lastTransitionTime":"2026-01-21T09:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:02 crc kubenswrapper[5113]: E0121 09:19:02.851655 5113 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400448Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861248Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"814c5727-ea8c-4a4c-99fd-0eb8e7b766cd\\\",\\\"systemUUID\\\":\\\"a84b16b3-46f1-4672-86d9-42da1a9b9cd6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:02 crc kubenswrapper[5113]: E0121 09:19:02.851802 5113 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 21 09:19:02 crc kubenswrapper[5113]: E0121 09:19:02.851826 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:02 crc kubenswrapper[5113]: E0121 09:19:02.952694 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:03 crc kubenswrapper[5113]: E0121 09:19:03.053769 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:03 crc kubenswrapper[5113]: E0121 09:19:03.154835 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:03 crc kubenswrapper[5113]: E0121 09:19:03.255861 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:03 crc kubenswrapper[5113]: I0121 09:19:03.307930 5113 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Jan 21 09:19:03 crc kubenswrapper[5113]: E0121 09:19:03.356254 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:03 crc kubenswrapper[5113]: E0121 09:19:03.456991 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:03 crc kubenswrapper[5113]: E0121 09:19:03.557727 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:03 crc kubenswrapper[5113]: I0121 
09:19:03.563001 5113 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Jan 21 09:19:03 crc kubenswrapper[5113]: E0121 09:19:03.658391 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:03 crc kubenswrapper[5113]: E0121 09:19:03.759444 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:03 crc kubenswrapper[5113]: E0121 09:19:03.860201 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:03 crc kubenswrapper[5113]: E0121 09:19:03.961169 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:04 crc kubenswrapper[5113]: E0121 09:19:04.061367 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:04 crc kubenswrapper[5113]: E0121 09:19:04.161585 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:04 crc kubenswrapper[5113]: E0121 09:19:04.262798 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:04 crc kubenswrapper[5113]: E0121 09:19:04.363208 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:04 crc kubenswrapper[5113]: E0121 09:19:04.463533 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:04 crc kubenswrapper[5113]: E0121 09:19:04.563643 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:04 crc kubenswrapper[5113]: E0121 09:19:04.664538 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not 
found" Jan 21 09:19:04 crc kubenswrapper[5113]: E0121 09:19:04.765448 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:04 crc kubenswrapper[5113]: E0121 09:19:04.866393 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:04 crc kubenswrapper[5113]: E0121 09:19:04.967492 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:05 crc kubenswrapper[5113]: E0121 09:19:05.068516 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:05 crc kubenswrapper[5113]: E0121 09:19:05.169487 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:05 crc kubenswrapper[5113]: E0121 09:19:05.270614 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:05 crc kubenswrapper[5113]: E0121 09:19:05.371088 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:05 crc kubenswrapper[5113]: E0121 09:19:05.472171 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:05 crc kubenswrapper[5113]: E0121 09:19:05.572840 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:05 crc kubenswrapper[5113]: I0121 09:19:05.669535 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:19:05 crc kubenswrapper[5113]: I0121 09:19:05.670005 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:19:05 crc kubenswrapper[5113]: I0121 09:19:05.671270 5113 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:05 crc kubenswrapper[5113]: I0121 09:19:05.671325 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:05 crc kubenswrapper[5113]: I0121 09:19:05.671343 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:05 crc kubenswrapper[5113]: E0121 09:19:05.672158 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:19:05 crc kubenswrapper[5113]: I0121 09:19:05.672604 5113 scope.go:117] "RemoveContainer" containerID="383eb31f942f4a72a515ee030cd46d5e1130d7d74a8927d5daa09c8d744a67f6" Jan 21 09:19:05 crc kubenswrapper[5113]: E0121 09:19:05.673039 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 21 09:19:05 crc kubenswrapper[5113]: E0121 09:19:05.673132 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:05 crc kubenswrapper[5113]: E0121 09:19:05.773531 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:05 crc kubenswrapper[5113]: E0121 09:19:05.873989 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:05 crc kubenswrapper[5113]: E0121 09:19:05.974316 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:06 crc kubenswrapper[5113]: E0121 
09:19:06.074850 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:06 crc kubenswrapper[5113]: E0121 09:19:06.175817 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:06 crc kubenswrapper[5113]: E0121 09:19:06.276542 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:06 crc kubenswrapper[5113]: E0121 09:19:06.377707 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:06 crc kubenswrapper[5113]: E0121 09:19:06.478200 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:06 crc kubenswrapper[5113]: E0121 09:19:06.578631 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:06 crc kubenswrapper[5113]: E0121 09:19:06.679526 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:06 crc kubenswrapper[5113]: E0121 09:19:06.780255 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:06 crc kubenswrapper[5113]: E0121 09:19:06.880616 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:06 crc kubenswrapper[5113]: E0121 09:19:06.990454 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:07 crc kubenswrapper[5113]: E0121 09:19:07.090929 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:07 crc kubenswrapper[5113]: E0121 09:19:07.191306 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 
09:19:07 crc kubenswrapper[5113]: E0121 09:19:07.292317 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:07 crc kubenswrapper[5113]: E0121 09:19:07.392497 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:07 crc kubenswrapper[5113]: E0121 09:19:07.492942 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:07 crc kubenswrapper[5113]: E0121 09:19:07.593149 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:07 crc kubenswrapper[5113]: E0121 09:19:07.693800 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:07 crc kubenswrapper[5113]: E0121 09:19:07.794444 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:07 crc kubenswrapper[5113]: E0121 09:19:07.895349 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:07 crc kubenswrapper[5113]: E0121 09:19:07.996108 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:08 crc kubenswrapper[5113]: E0121 09:19:08.097112 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:08 crc kubenswrapper[5113]: E0121 09:19:08.197501 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:08 crc kubenswrapper[5113]: E0121 09:19:08.298222 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:08 crc kubenswrapper[5113]: E0121 09:19:08.399273 5113 kubelet_node_status.go:515] "Error getting the current node from 
lister" err="node \"crc\" not found" Jan 21 09:19:08 crc kubenswrapper[5113]: I0121 09:19:08.408632 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:19:08 crc kubenswrapper[5113]: I0121 09:19:08.408993 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:19:08 crc kubenswrapper[5113]: I0121 09:19:08.410315 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:08 crc kubenswrapper[5113]: I0121 09:19:08.410372 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:08 crc kubenswrapper[5113]: I0121 09:19:08.410392 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:08 crc kubenswrapper[5113]: E0121 09:19:08.411043 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:19:08 crc kubenswrapper[5113]: I0121 09:19:08.411433 5113 scope.go:117] "RemoveContainer" containerID="383eb31f942f4a72a515ee030cd46d5e1130d7d74a8927d5daa09c8d744a67f6" Jan 21 09:19:08 crc kubenswrapper[5113]: E0121 09:19:08.411779 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 21 09:19:08 crc kubenswrapper[5113]: E0121 09:19:08.499700 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:08 crc kubenswrapper[5113]: E0121 09:19:08.600131 5113 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:08 crc kubenswrapper[5113]: E0121 09:19:08.700697 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:08 crc kubenswrapper[5113]: E0121 09:19:08.801574 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:08 crc kubenswrapper[5113]: E0121 09:19:08.902202 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:09 crc kubenswrapper[5113]: E0121 09:19:09.002548 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:09 crc kubenswrapper[5113]: E0121 09:19:09.102793 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:09 crc kubenswrapper[5113]: E0121 09:19:09.203840 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:09 crc kubenswrapper[5113]: E0121 09:19:09.304710 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:09 crc kubenswrapper[5113]: E0121 09:19:09.405678 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:09 crc kubenswrapper[5113]: E0121 09:19:09.506215 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:09 crc kubenswrapper[5113]: E0121 09:19:09.607302 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:09 crc kubenswrapper[5113]: E0121 09:19:09.707995 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:09 crc 
kubenswrapper[5113]: E0121 09:19:09.808378 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:09 crc kubenswrapper[5113]: E0121 09:19:09.909350 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:10 crc kubenswrapper[5113]: E0121 09:19:10.010415 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:10 crc kubenswrapper[5113]: E0121 09:19:10.110897 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:10 crc kubenswrapper[5113]: E0121 09:19:10.211201 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:10 crc kubenswrapper[5113]: E0121 09:19:10.312088 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:10 crc kubenswrapper[5113]: E0121 09:19:10.412931 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:10 crc kubenswrapper[5113]: E0121 09:19:10.513471 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:10 crc kubenswrapper[5113]: E0121 09:19:10.614392 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:10 crc kubenswrapper[5113]: E0121 09:19:10.714521 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:10 crc kubenswrapper[5113]: E0121 09:19:10.815498 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:10 crc kubenswrapper[5113]: E0121 09:19:10.916376 5113 kubelet_node_status.go:515] "Error getting the current node from lister" 
err="node \"crc\" not found" Jan 21 09:19:10 crc kubenswrapper[5113]: E0121 09:19:10.922912 5113 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 09:19:11 crc kubenswrapper[5113]: E0121 09:19:11.017201 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:11 crc kubenswrapper[5113]: E0121 09:19:11.117789 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:11 crc kubenswrapper[5113]: E0121 09:19:11.218856 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:11 crc kubenswrapper[5113]: E0121 09:19:11.319273 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:11 crc kubenswrapper[5113]: E0121 09:19:11.420057 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:11 crc kubenswrapper[5113]: E0121 09:19:11.521042 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:11 crc kubenswrapper[5113]: E0121 09:19:11.621604 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:11 crc kubenswrapper[5113]: E0121 09:19:11.722349 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:11 crc kubenswrapper[5113]: E0121 09:19:11.823242 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:11 crc kubenswrapper[5113]: E0121 09:19:11.923660 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:12 crc kubenswrapper[5113]: E0121 09:19:12.024442 5113 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:12 crc kubenswrapper[5113]: E0121 09:19:12.125108 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:12 crc kubenswrapper[5113]: E0121 09:19:12.226214 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:12 crc kubenswrapper[5113]: E0121 09:19:12.326793 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:12 crc kubenswrapper[5113]: E0121 09:19:12.426973 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:12 crc kubenswrapper[5113]: E0121 09:19:12.527725 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:12 crc kubenswrapper[5113]: E0121 09:19:12.628180 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:12 crc kubenswrapper[5113]: E0121 09:19:12.729062 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:12 crc kubenswrapper[5113]: E0121 09:19:12.830321 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:12 crc kubenswrapper[5113]: E0121 09:19:12.931079 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:13 crc kubenswrapper[5113]: E0121 09:19:13.031795 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:13 crc kubenswrapper[5113]: E0121 09:19:13.132835 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:13 crc 
kubenswrapper[5113]: E0121 09:19:13.208324 5113 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Jan 21 09:19:13 crc kubenswrapper[5113]: I0121 09:19:13.212956 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:13 crc kubenswrapper[5113]: I0121 09:19:13.213016 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:13 crc kubenswrapper[5113]: I0121 09:19:13.213033 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:13 crc kubenswrapper[5113]: I0121 09:19:13.213057 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:13 crc kubenswrapper[5113]: I0121 09:19:13.213075 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:13Z","lastTransitionTime":"2026-01-21T09:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:13 crc kubenswrapper[5113]: E0121 09:19:13.227845 5113 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400448Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861248Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"814c5727-ea8c-4a4c-99fd-0eb8e7b766cd\\\",\\\"systemUUID\\\":\\\"a84b16b3-46f1-4672-86d9-42da1a9b9cd6\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:13 crc kubenswrapper[5113]: I0121 09:19:13.234705 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:13 crc kubenswrapper[5113]: I0121 09:19:13.234811 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:13 crc kubenswrapper[5113]: I0121 09:19:13.234833 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:13 crc kubenswrapper[5113]: I0121 09:19:13.234862 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:13 crc kubenswrapper[5113]: I0121 09:19:13.234883 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:13Z","lastTransitionTime":"2026-01-21T09:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:13 crc kubenswrapper[5113]: E0121 09:19:13.248118 5113 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400448Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861248Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"814c5727-ea8c-4a4c-99fd-0eb8e7b766cd\\\",\\\"systemUUID\\\":\\\"a84b16b3-46f1-4672-86d9-42da1a9b9cd6\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:13 crc kubenswrapper[5113]: I0121 09:19:13.253110 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:13 crc kubenswrapper[5113]: I0121 09:19:13.253194 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:13 crc kubenswrapper[5113]: I0121 09:19:13.253222 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:13 crc kubenswrapper[5113]: I0121 09:19:13.253252 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:13 crc kubenswrapper[5113]: I0121 09:19:13.253278 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:13Z","lastTransitionTime":"2026-01-21T09:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:13 crc kubenswrapper[5113]: E0121 09:19:13.271091 5113 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400448Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861248Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"814c5727-ea8c-4a4c-99fd-0eb8e7b766cd\\\",\\\"systemUUID\\\":\\\"a84b16b3-46f1-4672-86d9-42da1a9b9cd6\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:13 crc kubenswrapper[5113]: I0121 09:19:13.275593 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:13 crc kubenswrapper[5113]: I0121 09:19:13.275672 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:13 crc kubenswrapper[5113]: I0121 09:19:13.275699 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:13 crc kubenswrapper[5113]: I0121 09:19:13.275766 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:13 crc kubenswrapper[5113]: I0121 09:19:13.275797 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:13Z","lastTransitionTime":"2026-01-21T09:19:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:13 crc kubenswrapper[5113]: E0121 09:19:13.287372 5113 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400448Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861248Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"814c5727-ea8c-4a4c-99fd-0eb8e7b766cd\\\",\\\"systemUUID\\\":\\\"a84b16b3-46f1-4672-86d9-42da1a9b9cd6\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:13 crc kubenswrapper[5113]: E0121 09:19:13.287601 5113 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 21 09:19:13 crc kubenswrapper[5113]: E0121 09:19:13.287625 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:13 crc kubenswrapper[5113]: E0121 09:19:13.387927 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:13 crc kubenswrapper[5113]: E0121 09:19:13.488071 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:13 crc kubenswrapper[5113]: E0121 09:19:13.588383 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:13 crc kubenswrapper[5113]: E0121 09:19:13.689123 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:13 crc kubenswrapper[5113]: E0121 09:19:13.789443 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:13 crc kubenswrapper[5113]: E0121 09:19:13.890488 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:13 crc kubenswrapper[5113]: E0121 
09:19:13.991047 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:14 crc kubenswrapper[5113]: E0121 09:19:14.091553 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:14 crc kubenswrapper[5113]: E0121 09:19:14.192091 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:14 crc kubenswrapper[5113]: E0121 09:19:14.293181 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:14 crc kubenswrapper[5113]: E0121 09:19:14.393794 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:14 crc kubenswrapper[5113]: E0121 09:19:14.494369 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:14 crc kubenswrapper[5113]: E0121 09:19:14.595444 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:14 crc kubenswrapper[5113]: E0121 09:19:14.696399 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:14 crc kubenswrapper[5113]: E0121 09:19:14.797451 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:14 crc kubenswrapper[5113]: E0121 09:19:14.897846 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:14 crc kubenswrapper[5113]: E0121 09:19:14.998468 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:15 crc kubenswrapper[5113]: E0121 09:19:15.099120 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 
09:19:15 crc kubenswrapper[5113]: E0121 09:19:15.200114 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:15 crc kubenswrapper[5113]: E0121 09:19:15.300461 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:15 crc kubenswrapper[5113]: E0121 09:19:15.401102 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:15 crc kubenswrapper[5113]: E0121 09:19:15.502185 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:15 crc kubenswrapper[5113]: E0121 09:19:15.602873 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:15 crc kubenswrapper[5113]: E0121 09:19:15.703503 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:15 crc kubenswrapper[5113]: E0121 09:19:15.803671 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:15 crc kubenswrapper[5113]: E0121 09:19:15.903941 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:16 crc kubenswrapper[5113]: E0121 09:19:16.004852 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:16 crc kubenswrapper[5113]: E0121 09:19:16.105918 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:16 crc kubenswrapper[5113]: E0121 09:19:16.207136 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:16 crc kubenswrapper[5113]: E0121 09:19:16.307290 5113 kubelet_node_status.go:515] "Error getting the current node from 
lister" err="node \"crc\" not found" Jan 21 09:19:16 crc kubenswrapper[5113]: E0121 09:19:16.408248 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:16 crc kubenswrapper[5113]: E0121 09:19:16.508791 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:16 crc kubenswrapper[5113]: E0121 09:19:16.610076 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:16 crc kubenswrapper[5113]: E0121 09:19:16.710276 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:16 crc kubenswrapper[5113]: E0121 09:19:16.811358 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:16 crc kubenswrapper[5113]: E0121 09:19:16.912178 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:17 crc kubenswrapper[5113]: E0121 09:19:17.013071 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:17 crc kubenswrapper[5113]: E0121 09:19:17.113700 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:17 crc kubenswrapper[5113]: E0121 09:19:17.214608 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:17 crc kubenswrapper[5113]: E0121 09:19:17.314705 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:17 crc kubenswrapper[5113]: E0121 09:19:17.415725 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:17 crc kubenswrapper[5113]: E0121 09:19:17.516308 5113 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:17 crc kubenswrapper[5113]: E0121 09:19:17.617678 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:17 crc kubenswrapper[5113]: E0121 09:19:17.718101 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:17 crc kubenswrapper[5113]: E0121 09:19:17.819303 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:17 crc kubenswrapper[5113]: E0121 09:19:17.919909 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:18 crc kubenswrapper[5113]: E0121 09:19:18.021415 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:18 crc kubenswrapper[5113]: E0121 09:19:18.121842 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:18 crc kubenswrapper[5113]: E0121 09:19:18.222241 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:18 crc kubenswrapper[5113]: E0121 09:19:18.322541 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:18 crc kubenswrapper[5113]: E0121 09:19:18.423441 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:18 crc kubenswrapper[5113]: E0121 09:19:18.524118 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:18 crc kubenswrapper[5113]: E0121 09:19:18.625460 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:18 crc 
kubenswrapper[5113]: E0121 09:19:18.726063 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:18 crc kubenswrapper[5113]: E0121 09:19:18.841160 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:18 crc kubenswrapper[5113]: I0121 09:19:18.842678 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:19:18 crc kubenswrapper[5113]: I0121 09:19:18.843603 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:18 crc kubenswrapper[5113]: I0121 09:19:18.843670 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:18 crc kubenswrapper[5113]: I0121 09:19:18.843683 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:18 crc kubenswrapper[5113]: E0121 09:19:18.844153 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:19:18 crc kubenswrapper[5113]: E0121 09:19:18.942290 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:19 crc kubenswrapper[5113]: E0121 09:19:19.043206 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:19 crc kubenswrapper[5113]: E0121 09:19:19.143441 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:19 crc kubenswrapper[5113]: E0121 09:19:19.243789 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:19 crc kubenswrapper[5113]: E0121 09:19:19.344002 5113 kubelet_node_status.go:515] "Error getting 
the current node from lister" err="node \"crc\" not found" Jan 21 09:19:19 crc kubenswrapper[5113]: E0121 09:19:19.444459 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:19 crc kubenswrapper[5113]: E0121 09:19:19.545808 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:19 crc kubenswrapper[5113]: E0121 09:19:19.646860 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:19 crc kubenswrapper[5113]: E0121 09:19:19.747625 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:19 crc kubenswrapper[5113]: E0121 09:19:19.848262 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:19 crc kubenswrapper[5113]: E0121 09:19:19.948561 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:20 crc kubenswrapper[5113]: I0121 09:19:20.035194 5113 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Jan 21 09:19:20 crc kubenswrapper[5113]: E0121 09:19:20.049046 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:20 crc kubenswrapper[5113]: E0121 09:19:20.149200 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:20 crc kubenswrapper[5113]: E0121 09:19:20.249883 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:20 crc kubenswrapper[5113]: E0121 09:19:20.350294 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:20 crc kubenswrapper[5113]: E0121 09:19:20.451176 
5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:20 crc kubenswrapper[5113]: E0121 09:19:20.552186 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:20 crc kubenswrapper[5113]: E0121 09:19:20.653413 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:20 crc kubenswrapper[5113]: E0121 09:19:20.754148 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:20 crc kubenswrapper[5113]: I0121 09:19:20.842857 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:19:20 crc kubenswrapper[5113]: I0121 09:19:20.843904 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:20 crc kubenswrapper[5113]: I0121 09:19:20.843975 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:20 crc kubenswrapper[5113]: I0121 09:19:20.843994 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:20 crc kubenswrapper[5113]: E0121 09:19:20.844837 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:19:20 crc kubenswrapper[5113]: E0121 09:19:20.854868 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:20 crc kubenswrapper[5113]: E0121 09:19:20.924122 5113 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 09:19:20 crc kubenswrapper[5113]: E0121 09:19:20.955662 5113 kubelet_node_status.go:515] "Error getting the current node 
from lister" err="node \"crc\" not found" Jan 21 09:19:21 crc kubenswrapper[5113]: E0121 09:19:21.056599 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:21 crc kubenswrapper[5113]: E0121 09:19:21.157817 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:21 crc kubenswrapper[5113]: E0121 09:19:21.258283 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:21 crc kubenswrapper[5113]: E0121 09:19:21.358979 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:21 crc kubenswrapper[5113]: E0121 09:19:21.460111 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:21 crc kubenswrapper[5113]: E0121 09:19:21.560713 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:21 crc kubenswrapper[5113]: E0121 09:19:21.661532 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:21 crc kubenswrapper[5113]: E0121 09:19:21.762175 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:21 crc kubenswrapper[5113]: I0121 09:19:21.843406 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:19:21 crc kubenswrapper[5113]: I0121 09:19:21.843509 5113 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 21 09:19:21 crc kubenswrapper[5113]: I0121 09:19:21.844960 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:21 crc kubenswrapper[5113]: I0121 09:19:21.845021 5113 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:21 crc kubenswrapper[5113]: I0121 09:19:21.845041 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:21 crc kubenswrapper[5113]: I0121 09:19:21.845077 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:21 crc kubenswrapper[5113]: I0121 09:19:21.845116 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:21 crc kubenswrapper[5113]: I0121 09:19:21.845139 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:21 crc kubenswrapper[5113]: E0121 09:19:21.845527 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:19:21 crc kubenswrapper[5113]: E0121 09:19:21.846344 5113 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 21 09:19:21 crc kubenswrapper[5113]: I0121 09:19:21.846705 5113 scope.go:117] "RemoveContainer" containerID="383eb31f942f4a72a515ee030cd46d5e1130d7d74a8927d5daa09c8d744a67f6" Jan 21 09:19:21 crc kubenswrapper[5113]: E0121 09:19:21.847045 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 21 09:19:21 crc kubenswrapper[5113]: E0121 09:19:21.862566 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 
09:19:21 crc kubenswrapper[5113]: E0121 09:19:21.963691 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:22 crc kubenswrapper[5113]: E0121 09:19:22.064785 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:22 crc kubenswrapper[5113]: E0121 09:19:22.165674 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:22 crc kubenswrapper[5113]: E0121 09:19:22.266349 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:22 crc kubenswrapper[5113]: E0121 09:19:22.367289 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:22 crc kubenswrapper[5113]: E0121 09:19:22.467729 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:22 crc kubenswrapper[5113]: E0121 09:19:22.568370 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:22 crc kubenswrapper[5113]: E0121 09:19:22.668907 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:22 crc kubenswrapper[5113]: E0121 09:19:22.769384 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:22 crc kubenswrapper[5113]: E0121 09:19:22.869860 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:22 crc kubenswrapper[5113]: E0121 09:19:22.969959 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:23 crc kubenswrapper[5113]: E0121 09:19:23.071162 5113 kubelet_node_status.go:515] "Error getting the current node from 
lister" err="node \"crc\" not found" Jan 21 09:19:23 crc kubenswrapper[5113]: E0121 09:19:23.171271 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:23 crc kubenswrapper[5113]: E0121 09:19:23.272318 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:23 crc kubenswrapper[5113]: E0121 09:19:23.352517 5113 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Jan 21 09:19:23 crc kubenswrapper[5113]: I0121 09:19:23.357716 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:23 crc kubenswrapper[5113]: I0121 09:19:23.357806 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:23 crc kubenswrapper[5113]: I0121 09:19:23.357825 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:23 crc kubenswrapper[5113]: I0121 09:19:23.357849 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:23 crc kubenswrapper[5113]: I0121 09:19:23.357868 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:23Z","lastTransitionTime":"2026-01-21T09:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 09:19:23 crc kubenswrapper[5113]: E0121 09:19:23.373842 5113 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400448Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861248Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"814c5727-ea8c-4a4c-99fd-0eb8e7b766cd\\\",\\\"systemUUID\\\":\\\"a84b16b3-46f1-4672-86d9-42da1a9b9cd6\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 09:19:23 crc kubenswrapper[5113]: I0121 09:19:23.378511 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:19:23 crc kubenswrapper[5113]: I0121 09:19:23.378578 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:19:23 crc kubenswrapper[5113]: I0121 09:19:23.378601 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:19:23 crc kubenswrapper[5113]: I0121 09:19:23.378629 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 09:19:23 crc kubenswrapper[5113]: I0121 09:19:23.378650 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:23Z","lastTransitionTime":"2026-01-21T09:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 09:19:23 crc kubenswrapper[5113]: E0121 09:19:23.394550 5113 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400448Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861248Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"814c5727-ea8c-4a4c-99fd-0eb8e7b766cd\\\",\\\"systemUUID\\\":\\\"a84b16b3-46f1-4672-86d9-42da1a9b9cd6\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 09:19:23 crc kubenswrapper[5113]: I0121 09:19:23.399407 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:19:23 crc kubenswrapper[5113]: I0121 09:19:23.399465 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:19:23 crc kubenswrapper[5113]: I0121 09:19:23.399483 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:19:23 crc kubenswrapper[5113]: I0121 09:19:23.399506 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 09:19:23 crc kubenswrapper[5113]: I0121 09:19:23.399524 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:23Z","lastTransitionTime":"2026-01-21T09:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 09:19:23 crc kubenswrapper[5113]: E0121 09:19:23.420087 5113 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400448Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861248Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"814c5727-ea8c-4a4c-99fd-0eb8e7b766cd\\\",\\\"systemUUID\\\":\\\"a84b16b3-46f1-4672-86d9-42da1a9b9cd6\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:23 crc kubenswrapper[5113]: I0121 09:19:23.424704 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:23 crc kubenswrapper[5113]: I0121 09:19:23.425064 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:23 crc kubenswrapper[5113]: I0121 09:19:23.425210 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:23 crc kubenswrapper[5113]: I0121 09:19:23.425409 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:23 crc kubenswrapper[5113]: I0121 09:19:23.425556 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:23Z","lastTransitionTime":"2026-01-21T09:19:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:23 crc kubenswrapper[5113]: E0121 09:19:23.442047 5113 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400448Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861248Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T09:19:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"814c5727-ea8c-4a4c-99fd-0eb8e7b766cd\\\",\\\"systemUUID\\\":\\\"a84b16b3-46f1-4672-86d9-42da1a9b9cd6\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:23 crc kubenswrapper[5113]: E0121 09:19:23.442291 5113 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 21 09:19:23 crc kubenswrapper[5113]: E0121 09:19:23.442347 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:23 crc kubenswrapper[5113]: E0121 09:19:23.543081 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:23 crc kubenswrapper[5113]: E0121 09:19:23.644294 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:23 crc kubenswrapper[5113]: E0121 09:19:23.744723 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:23 crc kubenswrapper[5113]: E0121 09:19:23.845230 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:23 crc kubenswrapper[5113]: E0121 09:19:23.945635 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:24 crc kubenswrapper[5113]: E0121 09:19:24.046315 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:24 crc kubenswrapper[5113]: E0121 
09:19:24.147388 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:24 crc kubenswrapper[5113]: E0121 09:19:24.247798 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:24 crc kubenswrapper[5113]: E0121 09:19:24.348336 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:24 crc kubenswrapper[5113]: E0121 09:19:24.448994 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:24 crc kubenswrapper[5113]: E0121 09:19:24.549936 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:24 crc kubenswrapper[5113]: E0121 09:19:24.651153 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:24 crc kubenswrapper[5113]: E0121 09:19:24.751882 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:24 crc kubenswrapper[5113]: E0121 09:19:24.852483 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:24 crc kubenswrapper[5113]: E0121 09:19:24.953337 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:25 crc kubenswrapper[5113]: E0121 09:19:25.054477 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:25 crc kubenswrapper[5113]: E0121 09:19:25.155132 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:25 crc kubenswrapper[5113]: E0121 09:19:25.256258 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 
09:19:25 crc kubenswrapper[5113]: E0121 09:19:25.356585 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:25 crc kubenswrapper[5113]: E0121 09:19:25.456838 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:25 crc kubenswrapper[5113]: E0121 09:19:25.557652 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:25 crc kubenswrapper[5113]: E0121 09:19:25.658729 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:25 crc kubenswrapper[5113]: E0121 09:19:25.759433 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:25 crc kubenswrapper[5113]: E0121 09:19:25.860458 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:25 crc kubenswrapper[5113]: E0121 09:19:25.961808 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:26 crc kubenswrapper[5113]: E0121 09:19:26.062256 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:26 crc kubenswrapper[5113]: E0121 09:19:26.163186 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:26 crc kubenswrapper[5113]: E0121 09:19:26.263512 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:26 crc kubenswrapper[5113]: E0121 09:19:26.364599 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:26 crc kubenswrapper[5113]: E0121 09:19:26.464715 5113 kubelet_node_status.go:515] "Error getting the current node from 
lister" err="node \"crc\" not found" Jan 21 09:19:26 crc kubenswrapper[5113]: E0121 09:19:26.565850 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:26 crc kubenswrapper[5113]: E0121 09:19:26.666962 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:26 crc kubenswrapper[5113]: E0121 09:19:26.767257 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:26 crc kubenswrapper[5113]: E0121 09:19:26.867725 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:26 crc kubenswrapper[5113]: E0121 09:19:26.968591 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:27 crc kubenswrapper[5113]: E0121 09:19:27.069837 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:27 crc kubenswrapper[5113]: E0121 09:19:27.170995 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:27 crc kubenswrapper[5113]: E0121 09:19:27.272372 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:27 crc kubenswrapper[5113]: E0121 09:19:27.373395 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:27 crc kubenswrapper[5113]: E0121 09:19:27.473818 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:27 crc kubenswrapper[5113]: E0121 09:19:27.574728 5113 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:27 crc kubenswrapper[5113]: E0121 09:19:27.675826 5113 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.702229 5113 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.777449 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.777479 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.777487 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.777499 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.777508 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:27Z","lastTransitionTime":"2026-01-21T09:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.781133 5113 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.791607 5113 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.847030 5113 apiserver.go:52] "Watching apiserver" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.853121 5113 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.853473 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-j8l6x","openshift-machine-config-operator/machine-config-daemon-7dhnt","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-8hbvk","openshift-multus/multus-additional-cni-plugins-8ss9n","openshift-multus/multus-vcw7s","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-network-node-identity/network-node-identity-dgvkt","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-dns/node-resolver-fjwbv","openshift-etcd/etcd-crc","openshift-multus/network-metrics-daemon-tcv7n","openshift-network-diagnostics/network-check-target-fhkjl","openshift-network-operator/iptables-alerter-5jnd7","openshift-ovn-kubernetes/ovnkube-node-qgkx4"] Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.854928 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.855052 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 09:19:27 crc kubenswrapper[5113]: E0121 09:19:27.855574 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.856133 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 09:19:27 crc kubenswrapper[5113]: E0121 09:19:27.856442 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.856798 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.859103 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.859151 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.859309 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.860155 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.860590 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.862411 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.863529 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.863836 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.870013 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.870058 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.870185 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-fjwbv" Jan 21 09:19:27 crc kubenswrapper[5113]: E0121 09:19:27.870267 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.878566 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.878946 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.878985 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.879530 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.883209 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.883255 5113 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.883272 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.883295 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.883312 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:27Z","lastTransitionTime":"2026-01-21T09:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.884536 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.888249 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-vcw7s" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.890869 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.890891 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.891272 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.891510 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.892009 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.895519 5113 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.895895 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-8ss9n" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.898192 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.898447 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.898579 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.901019 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.901181 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-j8l6x" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.903045 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.903122 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f7177e64-28e3-4b95-90ee-7d490e61bbb1-hosts-file\") pod \"node-resolver-fjwbv\" (UID: \"f7177e64-28e3-4b95-90ee-7d490e61bbb1\") " pod="openshift-dns/node-resolver-fjwbv" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.903223 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/11da35cd-b282-4537-ac8f-b6c86b18c21f-host-run-multus-certs\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.903289 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.903339 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" 
(UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.903388 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/11da35cd-b282-4537-ac8f-b6c86b18c21f-host-run-k8s-cni-cncf-io\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.903439 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.903487 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/11da35cd-b282-4537-ac8f-b6c86b18c21f-cni-binary-copy\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.903530 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/11da35cd-b282-4537-ac8f-b6c86b18c21f-os-release\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.903575 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/11da35cd-b282-4537-ac8f-b6c86b18c21f-hostroot\") pod \"multus-vcw7s\" (UID: 
\"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.903614 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/11da35cd-b282-4537-ac8f-b6c86b18c21f-etc-kubernetes\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.903656 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gmv5\" (UniqueName: \"kubernetes.io/projected/11da35cd-b282-4537-ac8f-b6c86b18c21f-kube-api-access-5gmv5\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.903699 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/11da35cd-b282-4537-ac8f-b6c86b18c21f-multus-conf-dir\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.903794 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.903847 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/11da35cd-b282-4537-ac8f-b6c86b18c21f-system-cni-dir\") pod \"multus-vcw7s\" (UID: 
\"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.903890 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/11da35cd-b282-4537-ac8f-b6c86b18c21f-host-run-netns\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.903939 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/11da35cd-b282-4537-ac8f-b6c86b18c21f-host-var-lib-cni-bin\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.904003 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.904088 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.904137 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7177e64-28e3-4b95-90ee-7d490e61bbb1-tmp-dir\") pod \"node-resolver-fjwbv\" (UID: 
\"f7177e64-28e3-4b95-90ee-7d490e61bbb1\") " pod="openshift-dns/node-resolver-fjwbv" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.904179 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/11da35cd-b282-4537-ac8f-b6c86b18c21f-cnibin\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.904235 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.904284 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.904330 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/11da35cd-b282-4537-ac8f-b6c86b18c21f-multus-socket-dir-parent\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.904376 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/11da35cd-b282-4537-ac8f-b6c86b18c21f-multus-daemon-config\") pod \"multus-vcw7s\" 
(UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.904436 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/11da35cd-b282-4537-ac8f-b6c86b18c21f-multus-cni-dir\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.904558 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.904829 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.905017 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.904666 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/11da35cd-b282-4537-ac8f-b6c86b18c21f-host-var-lib-cni-multus\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.906055 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.906545 5113 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/11da35cd-b282-4537-ac8f-b6c86b18c21f-host-var-lib-kubelet\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.906607 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.906655 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 21 09:19:27 crc kubenswrapper[5113]: E0121 09:19:27.906779 5113 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 09:19:27 crc kubenswrapper[5113]: E0121 09:19:27.906904 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 09:19:28.406869377 +0000 UTC m=+97.907696466 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.908017 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.908084 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.908163 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.908213 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vvsx\" (UniqueName: \"kubernetes.io/projected/f7177e64-28e3-4b95-90ee-7d490e61bbb1-kube-api-access-9vvsx\") pod \"node-resolver-fjwbv\" (UID: \"f7177e64-28e3-4b95-90ee-7d490e61bbb1\") " 
pod="openshift-dns/node-resolver-fjwbv" Jan 21 09:19:27 crc kubenswrapper[5113]: E0121 09:19:27.908356 5113 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.908508 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.908959 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.909489 5113 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 21 09:19:27 crc kubenswrapper[5113]: E0121 09:19:27.909682 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 09:19:28.409624701 +0000 UTC m=+97.910451770 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.911169 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.911691 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcv7n" Jan 21 09:19:27 crc kubenswrapper[5113]: E0121 09:19:27.911894 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcv7n" podUID="0d75af50-e19d-4048-b80e-51dae4c3378e" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.911961 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-8hbvk" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.920113 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.920370 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.922017 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.922761 5113 scope.go:117] "RemoveContainer" containerID="383eb31f942f4a72a515ee030cd46d5e1130d7d74a8927d5daa09c8d744a67f6" Jan 21 09:19:27 crc kubenswrapper[5113]: E0121 09:19:27.923092 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.925608 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.925768 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.926178 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 21 09:19:27 crc kubenswrapper[5113]: E0121 09:19:27.926364 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 09:19:27 crc kubenswrapper[5113]: E0121 
09:19:27.926389 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 09:19:27 crc kubenswrapper[5113]: E0121 09:19:27.926406 5113 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.926401 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 21 09:19:27 crc kubenswrapper[5113]: E0121 09:19:27.926494 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-21 09:19:28.426470932 +0000 UTC m=+97.927298001 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.927894 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.928023 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.928230 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.928493 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.928530 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.928580 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.928497 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.928756 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.930822 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.931100 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.931346 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.931853 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.932271 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: 
\"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.932807 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Jan 21 09:19:27 crc kubenswrapper[5113]: E0121 09:19:27.935508 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 09:19:27 crc kubenswrapper[5113]: E0121 09:19:27.935618 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 09:19:27 crc kubenswrapper[5113]: E0121 09:19:27.935712 5113 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 09:19:27 crc kubenswrapper[5113]: E0121 09:19:27.935886 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-21 09:19:28.435868074 +0000 UTC m=+97.936695363 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.937476 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.944234 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.945685 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.956819 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.966011 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.977618 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.981444 5113 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.985783 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.985832 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:27 crc 
kubenswrapper[5113]: I0121 09:19:27.985842 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.985859 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.985870 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:27Z","lastTransitionTime":"2026-01-21T09:19:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.989478 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjwbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7177e64-28e3-4b95-90ee-7d490e61bbb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9vvsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T09:19:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjwbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.996263 5113 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 09:19:27 crc kubenswrapper[5113]: I0121 09:19:27.996729 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] 
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.002216 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tcv7n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d75af50-e19d-4048-b80e-51dae4c3378e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnh4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnh4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T09:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tcv7n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.008616 5113 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.008665 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.008697 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.008729 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.008783 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.008810 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.008849 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.008872 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.008894 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.008919 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.008943 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.008964 5113 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.009022 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.009027 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.009052 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.009079 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.009100 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.009123 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.009122 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.009158 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.009185 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.009209 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.009234 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.009257 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.009281 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.009305 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.009330 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.009355 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.009378 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.009399 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: 
\"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.009421 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.009445 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.009467 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.009627 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.009689 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.009895 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.010173 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.010187 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.010241 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.010267 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.010317 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.010353 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.010387 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.010414 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.010422 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.010461 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.010495 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.010536 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.010581 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.010613 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.010649 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.010679 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.010716 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.010802 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.010850 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 21 09:19:28 crc 
kubenswrapper[5113]: I0121 09:19:28.010898 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.010951 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.010996 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.011038 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.011084 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.011132 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.011176 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.011217 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.011253 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.011287 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.011337 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: 
\"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.011399 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.011453 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.011504 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.011554 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.011625 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.011664 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.011700 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.011741 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.011812 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.011846 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.011880 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 21 09:19:28 crc 
kubenswrapper[5113]: I0121 09:19:28.011919 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.011958 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.011993 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.012029 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.012062 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.012100 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.012137 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.012170 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.012217 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.012250 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.012283 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.012316 5113 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.012349 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.012383 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.012417 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.012458 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.012494 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod 
\"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.012534 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.012572 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.012610 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.012643 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.012678 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.012714 5113 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.012812 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.012854 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.012888 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.012923 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.012959 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: 
\"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.012995 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.013029 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.013065 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.013099 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.013138 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.013179 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.013216 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.013253 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.013287 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.013327 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.013361 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 21 
09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.013398 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.013433 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.013467 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.013512 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.013549 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.013624 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.013668 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.013704 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.013740 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.013803 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.013842 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.013879 5113 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.013919 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.013959 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.014001 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.014044 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.014082 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: 
\"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.014120 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.014159 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.014195 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.014231 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.014268 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.014303 5113 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.014339 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.014374 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.014413 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.014455 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.014491 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod 
\"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.014529 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.014565 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.014605 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.014645 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.014681 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.014859 5113 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.014928 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.014984 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.015030 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.015068 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.015104 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod 
\"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.015148 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.015188 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.015226 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.015266 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.015307 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.015344 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.015388 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.015428 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.015467 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.015511 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.015555 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: 
\"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.015598 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.015645 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.015686 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.015736 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.015805 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.015848 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.015886 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.015929 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.010535 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.015973 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.016016 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.016056 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.016098 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.016144 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.016244 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" 
(UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.017000 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.017040 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.017063 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.017082 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.017102 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 21 09:19:28 crc 
kubenswrapper[5113]: I0121 09:19:28.017120 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.017144 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.017165 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.017185 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.017382 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.017405 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" 
(UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.017424 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.017443 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.017465 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.017485 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.017509 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod 
\"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.017533 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.017553 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.017576 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.017596 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.017617 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.017635 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.017680 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.017702 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.017722 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.017752 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.017775 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.017793 5113 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.017811 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.017831 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.017851 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.017870 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.017891 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: 
\"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.017912 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.017932 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.018613 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4af3bb76-a840-45dd-941d-0b6ef5883ed8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-659sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-659sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-659sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-659sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-659sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-659sf\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-659sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-659sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-659sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T09:19:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qgkx4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.020174 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.020237 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.020272 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.020699 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: 
\"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.020756 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.021803 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.021855 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.021887 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.022910 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.022969 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.023009 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.023058 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.023105 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.023148 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.023182 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: 
\"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.023217 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.023249 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.023392 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-etc-openvswitch\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.023433 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/73ab8d16-75a8-4471-b540-95356246fbfa-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8ss9n\" (UID: \"73ab8d16-75a8-4471-b540-95356246fbfa\") " pod="openshift-multus/multus-additional-cni-plugins-8ss9n" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.023477 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/11da35cd-b282-4537-ac8f-b6c86b18c21f-cni-binary-copy\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:28 crc 
kubenswrapper[5113]: I0121 09:19:28.023510 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdp8h\" (UniqueName: \"kubernetes.io/projected/73ab8d16-75a8-4471-b540-95356246fbfa-kube-api-access-vdp8h\") pod \"multus-additional-cni-plugins-8ss9n\" (UID: \"73ab8d16-75a8-4471-b540-95356246fbfa\") " pod="openshift-multus/multus-additional-cni-plugins-8ss9n" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.023546 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/11da35cd-b282-4537-ac8f-b6c86b18c21f-os-release\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.023578 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/11da35cd-b282-4537-ac8f-b6c86b18c21f-hostroot\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.026070 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/11da35cd-b282-4537-ac8f-b6c86b18c21f-etc-kubernetes\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.026143 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-host-cni-bin\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.026168 5113 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/203383be-f153-4823-b8bd-410046a5821a-host\") pod \"node-ca-j8l6x\" (UID: \"203383be-f153-4823-b8bd-410046a5821a\") " pod="openshift-image-registry/node-ca-j8l6x" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.026213 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5gmv5\" (UniqueName: \"kubernetes.io/projected/11da35cd-b282-4537-ac8f-b6c86b18c21f-kube-api-access-5gmv5\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.027065 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-host-run-netns\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.027161 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/27afa170-d0be-48dd-a0d6-02a747bb8e63-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-8hbvk\" (UID: \"27afa170-d0be-48dd-a0d6-02a747bb8e63\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-8hbvk" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.027211 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-run-systemd\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.027253 5113 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4af3bb76-a840-45dd-941d-0b6ef5883ed8-ovn-node-metrics-cert\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.027342 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8tqq\" (UniqueName: \"kubernetes.io/projected/46461c0d-1a9e-4b91-bf59-e8a11ee34bdd-kube-api-access-h8tqq\") pod \"machine-config-daemon-7dhnt\" (UID: \"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd\") " pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.027442 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/11da35cd-b282-4537-ac8f-b6c86b18c21f-multus-conf-dir\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.027588 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/11da35cd-b282-4537-ac8f-b6c86b18c21f-system-cni-dir\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.027654 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/11da35cd-b282-4537-ac8f-b6c86b18c21f-host-run-netns\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.027729 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/11da35cd-b282-4537-ac8f-b6c86b18c21f-host-var-lib-cni-bin\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.027936 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.027984 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7177e64-28e3-4b95-90ee-7d490e61bbb1-tmp-dir\") pod \"node-resolver-fjwbv\" (UID: \"f7177e64-28e3-4b95-90ee-7d490e61bbb1\") " pod="openshift-dns/node-resolver-fjwbv" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.028034 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/11da35cd-b282-4537-ac8f-b6c86b18c21f-cnibin\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.028083 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnh4x\" (UniqueName: \"kubernetes.io/projected/0d75af50-e19d-4048-b80e-51dae4c3378e-kube-api-access-nnh4x\") pod \"network-metrics-daemon-tcv7n\" (UID: \"0d75af50-e19d-4048-b80e-51dae4c3378e\") " pod="openshift-multus/network-metrics-daemon-tcv7n" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.028222 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/11da35cd-b282-4537-ac8f-b6c86b18c21f-multus-socket-dir-parent\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.028267 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/11da35cd-b282-4537-ac8f-b6c86b18c21f-multus-daemon-config\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.028309 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-host-slash\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.028366 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/11da35cd-b282-4537-ac8f-b6c86b18c21f-multus-cni-dir\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.028411 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/11da35cd-b282-4537-ac8f-b6c86b18c21f-host-var-lib-cni-multus\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.028453 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-systemd-units\") pod 
\"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.028498 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-var-lib-openvswitch\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.028541 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-node-log\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.028583 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-host-run-ovn-kubernetes\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.028688 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/11da35cd-b282-4537-ac8f-b6c86b18c21f-host-var-lib-kubelet\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.028791 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-run-ovn\") pod 
\"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.028854 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4af3bb76-a840-45dd-941d-0b6ef5883ed8-ovnkube-script-lib\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.028911 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/73ab8d16-75a8-4471-b540-95356246fbfa-cni-binary-copy\") pod \"multus-additional-cni-plugins-8ss9n\" (UID: \"73ab8d16-75a8-4471-b540-95356246fbfa\") " pod="openshift-multus/multus-additional-cni-plugins-8ss9n" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.028973 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/73ab8d16-75a8-4471-b540-95356246fbfa-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-8ss9n\" (UID: \"73ab8d16-75a8-4471-b540-95356246fbfa\") " pod="openshift-multus/multus-additional-cni-plugins-8ss9n" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.029130 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-host-cni-netd\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.029189 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.029249 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/73ab8d16-75a8-4471-b540-95356246fbfa-cnibin\") pod \"multus-additional-cni-plugins-8ss9n\" (UID: \"73ab8d16-75a8-4471-b540-95356246fbfa\") " pod="openshift-multus/multus-additional-cni-plugins-8ss9n" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.029306 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/27afa170-d0be-48dd-a0d6-02a747bb8e63-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-8hbvk\" (UID: \"27afa170-d0be-48dd-a0d6-02a747bb8e63\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-8hbvk" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.029490 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/203383be-f153-4823-b8bd-410046a5821a-serviceca\") pod \"node-ca-j8l6x\" (UID: \"203383be-f153-4823-b8bd-410046a5821a\") " pod="openshift-image-registry/node-ca-j8l6x" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.029564 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/46461c0d-1a9e-4b91-bf59-e8a11ee34bdd-proxy-tls\") pod \"machine-config-daemon-7dhnt\" (UID: \"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd\") " pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 
09:19:28.030607 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9vvsx\" (UniqueName: \"kubernetes.io/projected/f7177e64-28e3-4b95-90ee-7d490e61bbb1-kube-api-access-9vvsx\") pod \"node-resolver-fjwbv\" (UID: \"f7177e64-28e3-4b95-90ee-7d490e61bbb1\") " pod="openshift-dns/node-resolver-fjwbv" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.030662 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-host-kubelet\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.030710 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.030780 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f7177e64-28e3-4b95-90ee-7d490e61bbb1-hosts-file\") pod \"node-resolver-fjwbv\" (UID: \"f7177e64-28e3-4b95-90ee-7d490e61bbb1\") " pod="openshift-dns/node-resolver-fjwbv" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.030826 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d75af50-e19d-4048-b80e-51dae4c3378e-metrics-certs\") pod \"network-metrics-daemon-tcv7n\" (UID: \"0d75af50-e19d-4048-b80e-51dae4c3378e\") " pod="openshift-multus/network-metrics-daemon-tcv7n" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.030859 5113 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4af3bb76-a840-45dd-941d-0b6ef5883ed8-ovnkube-config\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.030977 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4af3bb76-a840-45dd-941d-0b6ef5883ed8-env-overrides\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.031010 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/27afa170-d0be-48dd-a0d6-02a747bb8e63-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-8hbvk\" (UID: \"27afa170-d0be-48dd-a0d6-02a747bb8e63\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-8hbvk" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.031050 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/46461c0d-1a9e-4b91-bf59-e8a11ee34bdd-rootfs\") pod \"machine-config-daemon-7dhnt\" (UID: \"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd\") " pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.031085 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/11da35cd-b282-4537-ac8f-b6c86b18c21f-host-run-multus-certs\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" 
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.031146 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/46461c0d-1a9e-4b91-bf59-e8a11ee34bdd-mcd-auth-proxy-config\") pod \"machine-config-daemon-7dhnt\" (UID: \"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd\") " pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.031187 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwzh4\" (UniqueName: \"kubernetes.io/projected/27afa170-d0be-48dd-a0d6-02a747bb8e63-kube-api-access-gwzh4\") pod \"ovnkube-control-plane-57b78d8988-8hbvk\" (UID: \"27afa170-d0be-48dd-a0d6-02a747bb8e63\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-8hbvk" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.031216 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9jb8\" (UniqueName: \"kubernetes.io/projected/203383be-f153-4823-b8bd-410046a5821a-kube-api-access-h9jb8\") pod \"node-ca-j8l6x\" (UID: \"203383be-f153-4823-b8bd-410046a5821a\") " pod="openshift-image-registry/node-ca-j8l6x" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.031266 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/11da35cd-b282-4537-ac8f-b6c86b18c21f-host-run-k8s-cni-cncf-io\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.031296 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-run-openvswitch\") pod \"ovnkube-node-qgkx4\" 
(UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.031323 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-log-socket\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.031353 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-659sf\" (UniqueName: \"kubernetes.io/projected/4af3bb76-a840-45dd-941d-0b6ef5883ed8-kube-api-access-659sf\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.031380 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/73ab8d16-75a8-4471-b540-95356246fbfa-system-cni-dir\") pod \"multus-additional-cni-plugins-8ss9n\" (UID: \"73ab8d16-75a8-4471-b540-95356246fbfa\") " pod="openshift-multus/multus-additional-cni-plugins-8ss9n" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.031431 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/73ab8d16-75a8-4471-b540-95356246fbfa-os-release\") pod \"multus-additional-cni-plugins-8ss9n\" (UID: \"73ab8d16-75a8-4471-b540-95356246fbfa\") " pod="openshift-multus/multus-additional-cni-plugins-8ss9n" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.031457 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/73ab8d16-75a8-4471-b540-95356246fbfa-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8ss9n\" (UID: \"73ab8d16-75a8-4471-b540-95356246fbfa\") " pod="openshift-multus/multus-additional-cni-plugins-8ss9n" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.031578 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.031596 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.031611 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.031627 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.031646 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.031661 5113 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.031675 5113 reconciler_common.go:299] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.031707 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.031721 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.031741 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.034591 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/11da35cd-b282-4537-ac8f-b6c86b18c21f-host-run-multus-certs\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.034752 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/11da35cd-b282-4537-ac8f-b6c86b18c21f-host-run-k8s-cni-cncf-io\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.035587 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/11da35cd-b282-4537-ac8f-b6c86b18c21f-hostroot\") pod \"multus-vcw7s\" (UID: 
\"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.036228 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.010718 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.010791 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.011175 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.011202 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.011330 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.011352 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.011536 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.011706 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.011764 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.011938 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.011945 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.012020 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.012480 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.012448 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.012494 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.012655 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.012669 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.012899 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.013098 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.013108 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.013200 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.013472 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.013715 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.013872 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.013877 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.013892 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.014140 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.014148 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.014346 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.014505 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.014500 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.014587 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.014853 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.014942 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.014974 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.015188 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.014718 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.015385 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.015486 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.015565 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.015576 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.015619 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.015907 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.037032 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.037050 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.015954 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.015976 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.016117 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.016208 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.016332 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.016345 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.016461 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.016865 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.016967 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.016982 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.017089 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.017131 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.017444 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.017536 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.017617 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.017812 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.017931 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.018183 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.018243 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.018361 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.018410 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.018658 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.018725 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.018891 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.018910 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.019186 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.019202 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.019292 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.019313 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.019362 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.019628 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.019832 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.019878 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.019905 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.020100 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.020136 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.020231 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.020335 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.020443 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.020852 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.020886 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.021008 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.021009 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.021291 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.021546 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.022107 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.022110 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.022120 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.022346 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.022361 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.022327 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.022377 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.022679 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.022873 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.022881 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.023120 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.023360 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.023471 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.023644 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.023675 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.023917 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.023978 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.023997 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.025146 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.025562 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.025671 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.025679 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.025715 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.025981 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.026163 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.026158 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.026172 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.039158 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/11da35cd-b282-4537-ac8f-b6c86b18c21f-multus-socket-dir-parent\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.037863 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/11da35cd-b282-4537-ac8f-b6c86b18c21f-etc-kubernetes\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.038666 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/11da35cd-b282-4537-ac8f-b6c86b18c21f-cni-binary-copy\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.039095 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/11da35cd-b282-4537-ac8f-b6c86b18c21f-host-var-lib-kubelet\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.026457 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.038867 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/11da35cd-b282-4537-ac8f-b6c86b18c21f-os-release\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.026580 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.027142 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.027285 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.027618 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.028826 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.028928 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.028949 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.029030 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.029043 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.029055 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.029196 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.029410 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.029544 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.029629 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.029729 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.030154 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.030193 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.030530 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.039304 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.030661 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.031547 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.031594 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.031846 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.031882 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.031899 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.031948 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.032123 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.032167 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.032180 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: E0121 09:19:28.032321 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:28.532290735 +0000 UTC m=+98.033117784 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.039414 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/11da35cd-b282-4537-ac8f-b6c86b18c21f-multus-cni-dir\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.039455 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/11da35cd-b282-4537-ac8f-b6c86b18c21f-host-var-lib-cni-multus\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.039482 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/11da35cd-b282-4537-ac8f-b6c86b18c21f-host-var-lib-cni-bin\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.039495 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/11da35cd-b282-4537-ac8f-b6c86b18c21f-host-run-netns\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.032597 5113 operation_generator.go:781] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.033090 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.033193 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.034549 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". 
PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.034566 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.034654 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.035072 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.035088 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.035216 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.035253 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.039790 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.035392 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.035940 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.035958 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.036130 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.036403 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.037901 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.037930 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.037977 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.038278 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.038432 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.038465 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.038982 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.039073 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.039102 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.039226 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.030583 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.039922 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/11da35cd-b282-4537-ac8f-b6c86b18c21f-multus-daemon-config\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s"
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.039999 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/11da35cd-b282-4537-ac8f-b6c86b18c21f-cnibin\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s"
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.040016 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.039830 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.040108 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.040148 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7177e64-28e3-4b95-90ee-7d490e61bbb1-tmp-dir\") pod \"node-resolver-fjwbv\" (UID: \"f7177e64-28e3-4b95-90ee-7d490e61bbb1\") " pod="openshift-dns/node-resolver-fjwbv"
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.040195 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/11da35cd-b282-4537-ac8f-b6c86b18c21f-multus-conf-dir\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s"
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.040284 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f7177e64-28e3-4b95-90ee-7d490e61bbb1-hosts-file\") pod \"node-resolver-fjwbv\" (UID: \"f7177e64-28e3-4b95-90ee-7d490e61bbb1\") " pod="openshift-dns/node-resolver-fjwbv"
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.040331 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.040364 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/11da35cd-b282-4537-ac8f-b6c86b18c21f-system-cni-dir\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s"
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.040388 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.040481 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.040483 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.040713 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.041076 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.041109 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.041479 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.041551 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.041665 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.041862 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.042120 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.042208 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.043552 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.043833 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.043851 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.044275 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.045052 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.046656 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.051424 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.051833 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.051906 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.052248 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.052336 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.052487 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.052660 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.052695 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.053060 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.052956 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.053313 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.053415 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.053525 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.053869 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.054401 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gmv5\" (UniqueName: \"kubernetes.io/projected/11da35cd-b282-4537-ac8f-b6c86b18c21f-kube-api-access-5gmv5\") pod \"multus-vcw7s\" (UID: \"11da35cd-b282-4537-ac8f-b6c86b18c21f\") " pod="openshift-multus/multus-vcw7s"
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.055299 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.056197 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.056566 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vvsx\" (UniqueName: \"kubernetes.io/projected/f7177e64-28e3-4b95-90ee-7d490e61bbb1-kube-api-access-9vvsx\") pod \"node-resolver-fjwbv\" (UID: \"f7177e64-28e3-4b95-90ee-7d490e61bbb1\") " pod="openshift-dns/node-resolver-fjwbv"
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.065406 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.071698 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.074547 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.079577 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.081618 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71f14d8-6cde-42c3-b111-ea25041b1e7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:18:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:18:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://764cbe8f2707a2fadda1efee75054bf400af0119c406626759b367c3bd5b9b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447
cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T09:17:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://b733dbfb86faee538fadad196e9f4133653e380f046a2a700481592da8080079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T09:17:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"
mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7744c08a2337b8278ce3b654e963924c4a470d102b593634ab6da80cfc6ab5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T09:17:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://721e8d92025d56350fd00228873a1f33f257d12ef3a712bc8a07ec9238a8a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T09:17:56Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c1502d2cb1898d8b79d4913673b05f8750b4eee5a387d50e0c69798a64c957b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T09:17:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://388aba20f2513376eaf1b69444ab5c9be3a8b48690161caa6ec6c54c39def4d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388aba20f2513376eaf1b69444ab5c9be3a8b48690161caa6ec6c54c39def4d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T09:17:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T09:17:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://a4bab95cb7dacee322ad597eb1e0f7032a4198d682a2da0799e593d3b254862f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4bab95cb7dacee322ad597eb1e0f7032a4198d682a2da0799e593d3b254862f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T09:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T09:17:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\
\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://13402d612234ad12bec0ec95debbb81207d159bdc87db7ba5f63780a70c18d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13402d612234ad12bec0ec95debbb81207d159bdc87db7ba5f63780a70c18d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T09:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T09:17:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T09:17:50Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.084747 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.088398 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.088432 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.088442 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.088456 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.088466 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:28Z","lastTransitionTime":"2026-01-21T09:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.094247 5113 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.094282 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.095589 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56bbb3a4-33c5-4edf-b331-6c8de091efa8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://f55202dae8577531752698ed58e25d96faab357fd47c7e1214e97d227c27dec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T09:17:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a5d9b314f85de77e4faf2c3725f9ac3bbf0b3efa5333a458592300fe5fadb236\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T09:17:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8c21a85eeaadf7c1ac610c91b634072cccf19ee75ba906cbfb9422538406201a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T09:17:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://383eb31f94
2f4a72a515ee030cd46d5e1130d7d74a8927d5daa09c8d744a67f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://383eb31f942f4a72a515ee030cd46d5e1130d7d74a8927d5daa09c8d744a67f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T09:18:59Z\\\",\\\"message\\\":\\\"ar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0121 09:18:58.768487 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 09:18:58.768618 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0121 09:18:58.769260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-6316935/tls.crt::/tmp/serving-cert-6316935/tls.key\\\\\\\"\\\\nI0121 09:18:59.386895 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 09:18:59.389071 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 09:18:59.389085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 09:18:59.389110 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 09:18:59.389115 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 09:18:59.439316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 09:18:59.439362 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 09:18:59.439371 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 09:18:59.439379 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0121 09:18:59.439377 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 09:18:59.439385 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 09:18:59.439410 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 09:18:59.439422 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 09:18:59.439912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T09:18:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ea73520647d1029136429fbd1dd2f9ae77c16ccdd5d18b96557ba585203bc15a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T09:17:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8f36f477e5d68cc57d9416681cfa4f9bf3ddce9fcd5eabc6232df87d40fa2477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f36f477e5d68cc57d9416681cfa4f9bf3ddce9fcd5eabc6232df87d40fa2477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T09:17:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T09:17:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T09:17:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.104985 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-vcw7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11da35cd-b282-4537-ac8f-b6c86b18c21f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gmv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T09:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vcw7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.115195 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8ss9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ab8d16-75a8-4471-b540-95356246fbfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdp8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdp8h\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdp8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdp8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdp8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdp8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdp8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T09:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8ss9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.121926 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j8l6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"203383be-f153-4823-b8bd-410046a5821a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9jb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T09:19:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j8l6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.129768 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-8hbvk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27afa170-d0be-48dd-a0d6-02a747bb8e63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwzh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwzh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T09:19:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-8hbvk\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.132717 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-host-cni-bin\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.132818 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-host-cni-bin\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.132846 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/203383be-f153-4823-b8bd-410046a5821a-host\") pod \"node-ca-j8l6x\" (UID: \"203383be-f153-4823-b8bd-410046a5821a\") " pod="openshift-image-registry/node-ca-j8l6x" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.132873 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-host-run-netns\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.132890 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/27afa170-d0be-48dd-a0d6-02a747bb8e63-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-8hbvk\" (UID: 
\"27afa170-d0be-48dd-a0d6-02a747bb8e63\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-8hbvk" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.132930 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-run-systemd\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.132954 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4af3bb76-a840-45dd-941d-0b6ef5883ed8-ovn-node-metrics-cert\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.132974 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/203383be-f153-4823-b8bd-410046a5821a-host\") pod \"node-ca-j8l6x\" (UID: \"203383be-f153-4823-b8bd-410046a5821a\") " pod="openshift-image-registry/node-ca-j8l6x" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.132999 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h8tqq\" (UniqueName: \"kubernetes.io/projected/46461c0d-1a9e-4b91-bf59-e8a11ee34bdd-kube-api-access-h8tqq\") pod \"machine-config-daemon-7dhnt\" (UID: \"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd\") " pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.133014 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-host-run-netns\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.133032 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nnh4x\" (UniqueName: \"kubernetes.io/projected/0d75af50-e19d-4048-b80e-51dae4c3378e-kube-api-access-nnh4x\") pod \"network-metrics-daemon-tcv7n\" (UID: \"0d75af50-e19d-4048-b80e-51dae4c3378e\") " pod="openshift-multus/network-metrics-daemon-tcv7n" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.133044 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-run-systemd\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.133085 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-host-slash\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.133110 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-systemd-units\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.133129 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-var-lib-openvswitch\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 
crc kubenswrapper[5113]: I0121 09:19:28.133174 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-node-log\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.133197 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-host-run-ovn-kubernetes\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.133242 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-run-ovn\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.133262 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4af3bb76-a840-45dd-941d-0b6ef5883ed8-ovnkube-script-lib\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.133284 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/73ab8d16-75a8-4471-b540-95356246fbfa-cni-binary-copy\") pod \"multus-additional-cni-plugins-8ss9n\" (UID: \"73ab8d16-75a8-4471-b540-95356246fbfa\") " pod="openshift-multus/multus-additional-cni-plugins-8ss9n" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.133288 5113 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-systemd-units\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.133328 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/73ab8d16-75a8-4471-b540-95356246fbfa-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-8ss9n\" (UID: \"73ab8d16-75a8-4471-b540-95356246fbfa\") " pod="openshift-multus/multus-additional-cni-plugins-8ss9n" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.133368 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-host-cni-netd\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.133411 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.133432 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/73ab8d16-75a8-4471-b540-95356246fbfa-cnibin\") pod \"multus-additional-cni-plugins-8ss9n\" (UID: \"73ab8d16-75a8-4471-b540-95356246fbfa\") " pod="openshift-multus/multus-additional-cni-plugins-8ss9n" Jan 21 09:19:28 crc 
kubenswrapper[5113]: I0121 09:19:28.133454 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/27afa170-d0be-48dd-a0d6-02a747bb8e63-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-8hbvk\" (UID: \"27afa170-d0be-48dd-a0d6-02a747bb8e63\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-8hbvk" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.134483 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/27afa170-d0be-48dd-a0d6-02a747bb8e63-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-8hbvk\" (UID: \"27afa170-d0be-48dd-a0d6-02a747bb8e63\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-8hbvk" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.134577 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-host-slash\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.134914 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/203383be-f153-4823-b8bd-410046a5821a-serviceca\") pod \"node-ca-j8l6x\" (UID: \"203383be-f153-4823-b8bd-410046a5821a\") " pod="openshift-image-registry/node-ca-j8l6x" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.134976 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/46461c0d-1a9e-4b91-bf59-e8a11ee34bdd-proxy-tls\") pod \"machine-config-daemon-7dhnt\" (UID: \"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd\") " pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 
09:19:28.135052 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-host-kubelet\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.135126 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d75af50-e19d-4048-b80e-51dae4c3378e-metrics-certs\") pod \"network-metrics-daemon-tcv7n\" (UID: \"0d75af50-e19d-4048-b80e-51dae4c3378e\") " pod="openshift-multus/network-metrics-daemon-tcv7n" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.135157 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4af3bb76-a840-45dd-941d-0b6ef5883ed8-ovnkube-config\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.135163 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/73ab8d16-75a8-4471-b540-95356246fbfa-cni-binary-copy\") pod \"multus-additional-cni-plugins-8ss9n\" (UID: \"73ab8d16-75a8-4471-b540-95356246fbfa\") " pod="openshift-multus/multus-additional-cni-plugins-8ss9n" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.135213 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4af3bb76-a840-45dd-941d-0b6ef5883ed8-env-overrides\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.135249 5113 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/27afa170-d0be-48dd-a0d6-02a747bb8e63-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-8hbvk\" (UID: \"27afa170-d0be-48dd-a0d6-02a747bb8e63\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-8hbvk" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.135271 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/46461c0d-1a9e-4b91-bf59-e8a11ee34bdd-rootfs\") pod \"machine-config-daemon-7dhnt\" (UID: \"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd\") " pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.135304 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/46461c0d-1a9e-4b91-bf59-e8a11ee34bdd-mcd-auth-proxy-config\") pod \"machine-config-daemon-7dhnt\" (UID: \"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd\") " pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.135376 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwzh4\" (UniqueName: \"kubernetes.io/projected/27afa170-d0be-48dd-a0d6-02a747bb8e63-kube-api-access-gwzh4\") pod \"ovnkube-control-plane-57b78d8988-8hbvk\" (UID: \"27afa170-d0be-48dd-a0d6-02a747bb8e63\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-8hbvk" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.135402 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h9jb8\" (UniqueName: \"kubernetes.io/projected/203383be-f153-4823-b8bd-410046a5821a-kube-api-access-h9jb8\") pod \"node-ca-j8l6x\" (UID: \"203383be-f153-4823-b8bd-410046a5821a\") " 
pod="openshift-image-registry/node-ca-j8l6x"
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.135445 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-run-openvswitch\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4"
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.135469 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-log-socket\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4"
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.135494 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-659sf\" (UniqueName: \"kubernetes.io/projected/4af3bb76-a840-45dd-941d-0b6ef5883ed8-kube-api-access-659sf\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4"
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.135518 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/73ab8d16-75a8-4471-b540-95356246fbfa-system-cni-dir\") pod \"multus-additional-cni-plugins-8ss9n\" (UID: \"73ab8d16-75a8-4471-b540-95356246fbfa\") " pod="openshift-multus/multus-additional-cni-plugins-8ss9n"
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.135540 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/73ab8d16-75a8-4471-b540-95356246fbfa-os-release\") pod \"multus-additional-cni-plugins-8ss9n\" (UID: \"73ab8d16-75a8-4471-b540-95356246fbfa\") " pod="openshift-multus/multus-additional-cni-plugins-8ss9n"
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.135565 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/73ab8d16-75a8-4471-b540-95356246fbfa-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8ss9n\" (UID: \"73ab8d16-75a8-4471-b540-95356246fbfa\") " pod="openshift-multus/multus-additional-cni-plugins-8ss9n"
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.135592 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-etc-openvswitch\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4"
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.135614 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/73ab8d16-75a8-4471-b540-95356246fbfa-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8ss9n\" (UID: \"73ab8d16-75a8-4471-b540-95356246fbfa\") " pod="openshift-multus/multus-additional-cni-plugins-8ss9n"
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.135669 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vdp8h\" (UniqueName: \"kubernetes.io/projected/73ab8d16-75a8-4471-b540-95356246fbfa-kube-api-access-vdp8h\") pod \"multus-additional-cni-plugins-8ss9n\" (UID: \"73ab8d16-75a8-4471-b540-95356246fbfa\") " pod="openshift-multus/multus-additional-cni-plugins-8ss9n"
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.135891 5113 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.135915 5113 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.135926 5113 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.135936 5113 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.135945 5113 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.135959 5113 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.135968 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.135979 5113 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.135989 5113 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136004 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136015 5113 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136025 5113 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136034 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136048 5113 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136057 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136067 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136080 5113 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136089 5113 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136097 5113 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136106 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136120 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136132 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136142 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136150 5113 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136167 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136177 5113 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136189 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136202 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136220 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136235 5113 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136245 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136258 5113 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136269 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136278 5113 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136287 5113 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136300 5113 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136309 5113 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136319 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136327 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136341 5113 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136351 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136360 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136373 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136383 5113 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136393 5113 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136402 5113 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136415 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136424 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136433 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136446 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136458 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136468 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136478 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136487 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136501 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136511 5113 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136520 5113 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136533 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136542 5113 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136552 5113 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136561 5113 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136573 5113 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136595 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136605 5113 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136614 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136627 5113 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136636 5113 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136645 5113 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136658 5113 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136666 5113 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136676 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136685 5113 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136697 5113 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136707 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136715 5113 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136724 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136739 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136761 5113 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136772 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136780 5113 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136792 5113 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136801 5113 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136810 5113 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136821 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136830 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136840 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136849 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136862 5113 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136872 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136880 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136890 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136903 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136912 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136923 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136935 5113 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136938 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/203383be-f153-4823-b8bd-410046a5821a-serviceca\") pod \"node-ca-j8l6x\" (UID: \"203383be-f153-4823-b8bd-410046a5821a\") " pod="openshift-image-registry/node-ca-j8l6x"
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136951 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136962 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.136971 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.135985 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4af3bb76-a840-45dd-941d-0b6ef5883ed8-ovnkube-config\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4"
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.137011 5113 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.137026 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-var-lib-openvswitch\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4"
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.137031 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.138099 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/73ab8d16-75a8-4471-b540-95356246fbfa-os-release\") pod \"multus-additional-cni-plugins-8ss9n\" (UID: \"73ab8d16-75a8-4471-b540-95356246fbfa\") " pod="openshift-multus/multus-additional-cni-plugins-8ss9n"
Jan 21 09:19:28 crc kubenswrapper[5113]: E0121 09:19:28.138104 5113 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 21 09:19:28 crc kubenswrapper[5113]: E0121 09:19:28.138263 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d75af50-e19d-4048-b80e-51dae4c3378e-metrics-certs podName:0d75af50-e19d-4048-b80e-51dae4c3378e nodeName:}" failed. No retries permitted until 2026-01-21 09:19:28.638232752 +0000 UTC m=+98.139059801 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0d75af50-e19d-4048-b80e-51dae4c3378e-metrics-certs") pod "network-metrics-daemon-tcv7n" (UID: "0d75af50-e19d-4048-b80e-51dae4c3378e") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.138468 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-log-socket\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4"
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.138618 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-run-openvswitch\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4"
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.138697 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/46461c0d-1a9e-4b91-bf59-e8a11ee34bdd-mcd-auth-proxy-config\") pod \"machine-config-daemon-7dhnt\" (UID: \"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd\") " pod="openshift-machine-config-operator/machine-config-daemon-7dhnt"
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.138716 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-host-cni-netd\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4"
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.138700 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/73ab8d16-75a8-4471-b540-95356246fbfa-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8ss9n\" (UID: \"73ab8d16-75a8-4471-b540-95356246fbfa\") " pod="openshift-multus/multus-additional-cni-plugins-8ss9n"
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.138772 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-etc-openvswitch\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4"
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.138783 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-host-kubelet\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4"
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.138783 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-host-run-ovn-kubernetes\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4"
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.138808 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/73ab8d16-75a8-4471-b540-95356246fbfa-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8ss9n\" (UID: \"73ab8d16-75a8-4471-b540-95356246fbfa\") " pod="openshift-multus/multus-additional-cni-plugins-8ss9n"
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.138857 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.138863 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/73ab8d16-75a8-4471-b540-95356246fbfa-system-cni-dir\") pod \"multus-additional-cni-plugins-8ss9n\" (UID: \"73ab8d16-75a8-4471-b540-95356246fbfa\") " pod="openshift-multus/multus-additional-cni-plugins-8ss9n"
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.138842 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-node-log\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4"
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.138934 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.138952 5113 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.138965 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.138985 5113 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.138998 5113 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139011 5113 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139024 5113 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139042 5113 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139059 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139073 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139085 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Jan 21 09:19:28 crc
kubenswrapper[5113]: I0121 09:19:28.139086 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/46461c0d-1a9e-4b91-bf59-e8a11ee34bdd-rootfs\") pod \"machine-config-daemon-7dhnt\" (UID: \"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd\") " pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139103 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139120 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139133 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139145 5113 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139162 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139173 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on 
node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139184 5113 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139199 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139215 5113 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139226 5113 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139238 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139256 5113 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139272 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139286 5113 reconciler_common.go:299] "Volume detached for volume 
\"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139300 5113 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139315 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139327 5113 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139340 5113 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139353 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139368 5113 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139380 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc 
kubenswrapper[5113]: I0121 09:19:28.139425 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139442 5113 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139454 5113 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139469 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139482 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139501 5113 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139513 5113 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139534 5113 
reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139550 5113 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139572 5113 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139584 5113 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139597 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139616 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139617 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/73ab8d16-75a8-4471-b540-95356246fbfa-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-8ss9n\" (UID: \"73ab8d16-75a8-4471-b540-95356246fbfa\") " 
pod="openshift-multus/multus-additional-cni-plugins-8ss9n" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139628 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139679 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139696 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139720 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139756 5113 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139737 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139774 5113 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139811 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-run-ovn\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139819 5113 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139836 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4af3bb76-a840-45dd-941d-0b6ef5883ed8-ovnkube-script-lib\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139850 5113 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139879 5113 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139903 5113 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc 
kubenswrapper[5113]: I0121 09:19:28.139916 5113 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139929 5113 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139942 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139960 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139972 5113 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.139983 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140000 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140013 5113 reconciler_common.go:299] "Volume detached for volume 
\"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140025 5113 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140038 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140054 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140067 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140080 5113 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140093 5113 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140110 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140122 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140135 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140153 5113 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140166 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140179 5113 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140193 5113 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140209 5113 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 21 
09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140222 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140235 5113 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140250 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140269 5113 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140286 5113 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140298 5113 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140310 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140352 5113 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140366 5113 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140380 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140309 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4af3bb76-a840-45dd-941d-0b6ef5883ed8-env-overrides\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140413 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140462 5113 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140476 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140489 5113 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140503 5113 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140519 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140531 5113 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140544 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140557 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140577 5113 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140591 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: 
\"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140603 5113 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140615 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140632 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140645 5113 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140656 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140672 5113 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140684 5113 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 
09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140696 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140709 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140727 5113 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140759 5113 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140772 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.140789 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.142626 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/46461c0d-1a9e-4b91-bf59-e8a11ee34bdd-proxy-tls\") pod \"machine-config-daemon-7dhnt\" (UID: \"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd\") " 
pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.143146 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4af3bb76-a840-45dd-941d-0b6ef5883ed8-ovn-node-metrics-cert\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.144050 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/27afa170-d0be-48dd-a0d6-02a747bb8e63-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-8hbvk\" (UID: \"27afa170-d0be-48dd-a0d6-02a747bb8e63\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-8hbvk" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.144655 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.146012 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/73ab8d16-75a8-4471-b540-95356246fbfa-cnibin\") pod \"multus-additional-cni-plugins-8ss9n\" (UID: \"73ab8d16-75a8-4471-b540-95356246fbfa\") " pod="openshift-multus/multus-additional-cni-plugins-8ss9n" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.153935 5113 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnh4x\" (UniqueName: \"kubernetes.io/projected/0d75af50-e19d-4048-b80e-51dae4c3378e-kube-api-access-nnh4x\") pod \"network-metrics-daemon-tcv7n\" (UID: \"0d75af50-e19d-4048-b80e-51dae4c3378e\") " pod="openshift-multus/network-metrics-daemon-tcv7n" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.156055 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8tqq\" (UniqueName: \"kubernetes.io/projected/46461c0d-1a9e-4b91-bf59-e8a11ee34bdd-kube-api-access-h8tqq\") pod \"machine-config-daemon-7dhnt\" (UID: \"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd\") " pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.157428 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/27afa170-d0be-48dd-a0d6-02a747bb8e63-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-8hbvk\" (UID: \"27afa170-d0be-48dd-a0d6-02a747bb8e63\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-8hbvk" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.158223 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdp8h\" (UniqueName: \"kubernetes.io/projected/73ab8d16-75a8-4471-b540-95356246fbfa-kube-api-access-vdp8h\") pod \"multus-additional-cni-plugins-8ss9n\" (UID: \"73ab8d16-75a8-4471-b540-95356246fbfa\") " pod="openshift-multus/multus-additional-cni-plugins-8ss9n" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.158647 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.160435 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9jb8\" (UniqueName: \"kubernetes.io/projected/203383be-f153-4823-b8bd-410046a5821a-kube-api-access-h9jb8\") pod \"node-ca-j8l6x\" (UID: \"203383be-f153-4823-b8bd-410046a5821a\") " pod="openshift-image-registry/node-ca-j8l6x" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.164516 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-659sf\" (UniqueName: \"kubernetes.io/projected/4af3bb76-a840-45dd-941d-0b6ef5883ed8-kube-api-access-659sf\") pod \"ovnkube-node-qgkx4\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.166890 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8tqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8tqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T09:19:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7dhnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.167300 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwzh4\" (UniqueName: \"kubernetes.io/projected/27afa170-d0be-48dd-a0d6-02a747bb8e63-kube-api-access-gwzh4\") pod \"ovnkube-control-plane-57b78d8988-8hbvk\" (UID: \"27afa170-d0be-48dd-a0d6-02a747bb8e63\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-8hbvk" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.174188 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjwbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7177e64-28e3-4b95-90ee-7d490e61bbb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9vvsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T09:19:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjwbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.183373 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.192056 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.192098 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.192108 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.192122 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.192131 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:28Z","lastTransitionTime":"2026-01-21T09:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.194130 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.197783 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 21 09:19:28 crc kubenswrapper[5113]: W0121 09:19:28.206959 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc4541ce_7789_4670_bc75_5c2868e52ce0.slice/crio-a1b67cbdb9d5d86340f804f2918947a7ba08ece9c599858211e2c2a656b43877 WatchSource:0}: Error finding container a1b67cbdb9d5d86340f804f2918947a7ba08ece9c599858211e2c2a656b43877: Status 404 returned error can't find the container with id a1b67cbdb9d5d86340f804f2918947a7ba08ece9c599858211e2c2a656b43877 Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.208850 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-fjwbv" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.256577 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.266035 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-vcw7s" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.275792 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-8ss9n" Jan 21 09:19:28 crc kubenswrapper[5113]: W0121 09:19:28.279481 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod428b39f5_eb1c_4f65_b7a4_eeb6e84860cc.slice/crio-25dc0d2c4191edc3f91519d98e414d9e876fffdf1e4cfe8fac6c60a3804d1869 WatchSource:0}: Error finding container 25dc0d2c4191edc3f91519d98e414d9e876fffdf1e4cfe8fac6c60a3804d1869: Status 404 returned error can't find the container with id 25dc0d2c4191edc3f91519d98e414d9e876fffdf1e4cfe8fac6c60a3804d1869 Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.286629 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-j8l6x" Jan 21 09:19:28 crc kubenswrapper[5113]: W0121 09:19:28.289469 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11da35cd_b282_4537_ac8f_b6c86b18c21f.slice/crio-20369863d799f70e4869a8006dbab9b52d2359d0d7175d15dfea00e8afb7b544 WatchSource:0}: Error finding container 20369863d799f70e4869a8006dbab9b52d2359d0d7175d15dfea00e8afb7b544: Status 404 returned error can't find the container with id 20369863d799f70e4869a8006dbab9b52d2359d0d7175d15dfea00e8afb7b544 Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.296286 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.296322 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.296332 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.296346 5113 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.296356 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:28Z","lastTransitionTime":"2026-01-21T09:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.296685 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-8hbvk" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.309121 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:28 crc kubenswrapper[5113]: W0121 09:19:28.313835 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod203383be_f153_4823_b8bd_410046a5821a.slice/crio-90124dfa3cb8a7853c6e05fd6133531333e5653cced3c81754f3ae108947a8db WatchSource:0}: Error finding container 90124dfa3cb8a7853c6e05fd6133531333e5653cced3c81754f3ae108947a8db: Status 404 returned error can't find the container with id 90124dfa3cb8a7853c6e05fd6133531333e5653cced3c81754f3ae108947a8db Jan 21 09:19:28 crc kubenswrapper[5113]: W0121 09:19:28.316621 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73ab8d16_75a8_4471_b540_95356246fbfa.slice/crio-b652691a5a36432981db5e68ac518a09309f7e0534211bfecb8b8953e9a23814 WatchSource:0}: Error finding container b652691a5a36432981db5e68ac518a09309f7e0534211bfecb8b8953e9a23814: Status 404 returned error can't find the container with id 
b652691a5a36432981db5e68ac518a09309f7e0534211bfecb8b8953e9a23814 Jan 21 09:19:28 crc kubenswrapper[5113]: W0121 09:19:28.329302 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4af3bb76_a840_45dd_941d_0b6ef5883ed8.slice/crio-a45e2e445afa3f2e6aa1828d0a90935a0ac6f11661a8aae3131f0361b3925386 WatchSource:0}: Error finding container a45e2e445afa3f2e6aa1828d0a90935a0ac6f11661a8aae3131f0361b3925386: Status 404 returned error can't find the container with id a45e2e445afa3f2e6aa1828d0a90935a0ac6f11661a8aae3131f0361b3925386 Jan 21 09:19:28 crc kubenswrapper[5113]: W0121 09:19:28.330476 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27afa170_d0be_48dd_a0d6_02a747bb8e63.slice/crio-9b9f907156f8c01822a6be54ea96a5b1d6ed2f59ffc8c6150e8019af22a08ac6 WatchSource:0}: Error finding container 9b9f907156f8c01822a6be54ea96a5b1d6ed2f59ffc8c6150e8019af22a08ac6: Status 404 returned error can't find the container with id 9b9f907156f8c01822a6be54ea96a5b1d6ed2f59ffc8c6150e8019af22a08ac6 Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.338817 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" Jan 21 09:19:28 crc kubenswrapper[5113]: W0121 09:19:28.376982 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46461c0d_1a9e_4b91_bf59_e8a11ee34bdd.slice/crio-ed4b8c43efc8121d4e6a2bb79a380fb44164aedc567b589ae4b8eed5ceba8dce WatchSource:0}: Error finding container ed4b8c43efc8121d4e6a2bb79a380fb44164aedc567b589ae4b8eed5ceba8dce: Status 404 returned error can't find the container with id ed4b8c43efc8121d4e6a2bb79a380fb44164aedc567b589ae4b8eed5ceba8dce Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.398084 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.398133 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.398145 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.398163 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.398179 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:28Z","lastTransitionTime":"2026-01-21T09:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.443361 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.443427 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.443460 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 09:19:28 crc kubenswrapper[5113]: E0121 09:19:28.443543 5113 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 09:19:28 crc kubenswrapper[5113]: E0121 09:19:28.443588 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 09:19:28 crc kubenswrapper[5113]: E0121 09:19:28.443627 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 09:19:28 crc kubenswrapper[5113]: E0121 09:19:28.443594 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 09:19:28 crc kubenswrapper[5113]: E0121 09:19:28.443646 5113 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.443648 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 09:19:28 crc kubenswrapper[5113]: E0121 09:19:28.443668 5113 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 09:19:28 crc kubenswrapper[5113]: E0121 09:19:28.443695 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 09:19:29.44367819 +0000 UTC m=+98.944505249 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 09:19:28 crc kubenswrapper[5113]: E0121 09:19:28.443715 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 09:19:28 crc kubenswrapper[5113]: E0121 09:19:28.443798 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-21 09:19:29.443718641 +0000 UTC m=+98.944545730 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 09:19:28 crc kubenswrapper[5113]: E0121 09:19:28.443834 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 09:19:29.443820634 +0000 UTC m=+98.944647793 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 21 09:19:28 crc kubenswrapper[5113]: E0121 09:19:28.443883    5113 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 09:19:28 crc kubenswrapper[5113]: E0121 09:19:28.444378    5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-21 09:19:29.444356129 +0000 UTC m=+98.945183198 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.496672    5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-8hbvk" event={"ID":"27afa170-d0be-48dd-a0d6-02a747bb8e63","Type":"ContainerStarted","Data":"9b9f907156f8c01822a6be54ea96a5b1d6ed2f59ffc8c6150e8019af22a08ac6"}
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.498619    5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-j8l6x" event={"ID":"203383be-f153-4823-b8bd-410046a5821a","Type":"ContainerStarted","Data":"90124dfa3cb8a7853c6e05fd6133531333e5653cced3c81754f3ae108947a8db"}
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.500963    5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.500989    5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.500998    5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.501010    5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.501021    5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:28Z","lastTransitionTime":"2026-01-21T09:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.501289    5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8ss9n" event={"ID":"73ab8d16-75a8-4471-b540-95356246fbfa","Type":"ContainerStarted","Data":"b652691a5a36432981db5e68ac518a09309f7e0534211bfecb8b8953e9a23814"}
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.503848    5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"041fa44640e4678667f3c80d90ff360efcc698339d414b6046401bf4247341ec"}
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.503881    5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"a1b67cbdb9d5d86340f804f2918947a7ba08ece9c599858211e2c2a656b43877"}
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.505515    5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vcw7s" event={"ID":"11da35cd-b282-4537-ac8f-b6c86b18c21f","Type":"ContainerStarted","Data":"20369863d799f70e4869a8006dbab9b52d2359d0d7175d15dfea00e8afb7b544"}
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.507328    5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"25dc0d2c4191edc3f91519d98e414d9e876fffdf1e4cfe8fac6c60a3804d1869"}
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.512099    5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-fjwbv" event={"ID":"f7177e64-28e3-4b95-90ee-7d490e61bbb1","Type":"ContainerStarted","Data":"7614a03cdf71a95c62cd94023349cf42c7b2177485fae82fb6426067c4da675e"}
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.513680    5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"99e82e5faa934032f7c6e7386fe917b6feb0034932cfec69cc4c85136151bca9"}
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.513707    5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"1b31d82890f45478b1c7b7b83b2ec51be65fc7699f9f767659a59b84aee579a6"}
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.516384    5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" event={"ID":"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd","Type":"ContainerStarted","Data":"ed4b8c43efc8121d4e6a2bb79a380fb44164aedc567b589ae4b8eed5ceba8dce"}
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.527272    5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bd12f7b-ba94-4e37-93de-26468bca0f0a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:17:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:17:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://03413fd6528d11a5ca1743e7b6d3b467b83b8013e06d4e7f02da3a81f5a3c159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T09:17:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3753f39d3e813e69237bc578baa42b6e2f7c1e1498ec995df75799be2050518e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3753f39d3e813e69237bc578baa42b6e2f7c1e1498ec995df75799be2050518e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T09:17:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T09:17:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T09:17:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.528598 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" 
event={"ID":"4af3bb76-a840-45dd-941d-0b6ef5883ed8","Type":"ContainerStarted","Data":"a45e2e445afa3f2e6aa1828d0a90935a0ac6f11661a8aae3131f0361b3925386"}
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.544679    5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:19:28 crc kubenswrapper[5113]: E0121 09:19:28.544941    5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:29.544919021 +0000 UTC m=+99.045746070 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.545783    5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71f14d8-6cde-42c3-b111-ea25041b1e7e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:17:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:18:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:18:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://764cbe8f2707a2fadda1efee75054bf400af0119c406626759b367c3bd5b9b6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T09:17:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://b733dbfb86faee538fadad196e9f4133653e380f046a2a700481592da8080079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T09:17:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7744c08a2337b8278ce3b654e963924c4a470d102b593634ab6da80cfc6ab5ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T09:17:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://721e8d92025d56350fd00228873a1f33f257d12ef3a712bc8a07ec9238a8a021\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T09:17:56Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9c1502d2cb1898d8b79d4913673b05f8750b4eee5a387d50e0c69798a64c957b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T09:17:55Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://388aba20f2513376eaf1b69444ab5c9be3a8b48690161caa6ec6c54c39def4d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388aba20f2513376eaf1b69444ab5c9be3a8b48690161caa6ec6c54c39def4d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T09:17:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T09:17:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://a4bab95cb7dacee322ad597eb1e0f7032a4198d682a2da0799e593d3b254862f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4bab95cb7dacee322ad597eb1e0f7032a4198d682a2da0799e593d3b254862f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T09:17:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T09:17:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://13402d612234ad12bec0ec95debbb81207d159bdc87db7ba5f63780a70c18d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://13402d612234ad12bec0ec95debbb81207d159bdc87db7ba5f63780a70c18d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T09:17:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T09:17:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T09:17:50Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.559708 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56bbb3a4-33c5-4edf-b331-6c8de091efa8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:17:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://f55202dae8577531752698ed58e25d96faab357fd47c7e1214e97d227c27dec1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T09:17:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\
"cri-o://a5d9b314f85de77e4faf2c3725f9ac3bbf0b3efa5333a458592300fe5fadb236\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T09:17:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8c21a85eeaadf7c1ac610c91b634072cccf19ee75ba906cbfb9422538406201a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T09:17:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://383eb31f942f4a72a515ee030cd46d5e1130d7d74a8927d5daa09c8d744a67f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://383eb31f942f4a72a515ee030cd46d5e1130d7d74a8927d5daa09c8d744a67f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T09:18:59Z\\\",\\\"message\\\":\\\"ar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0121 09:18:58.768487 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0121 09:18:58.768618 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0121 09:18:58.769260 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-6316935/tls.crt::/tmp/serving-cert-6316935/tls.key\\\\\\\"\\\\nI0121 09:18:59.386895 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 09:18:59.389071 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 09:18:59.389085 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 09:18:59.389110 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 09:18:59.389115 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" 
limit=200\\\\nI0121 09:18:59.439316 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 09:18:59.439362 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 09:18:59.439371 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 09:18:59.439379 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0121 09:18:59.439377 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0121 09:18:59.439385 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 09:18:59.439410 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 09:18:59.439422 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0121 09:18:59.439912 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T09:18:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ea73520647d1029136429fbd1dd2f9ae77c16ccdd5d18b96557ba585203bc15a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T09:17:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8f36f477e5d68cc57d9416681cfa4f9bf3ddce9fcd5eabc6232df87d40fa2477\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f36f477e5d68cc57d9416681cfa4f9bf3ddce9fcd5eabc6232df87d40fa2477\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T09:17:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T09:17:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T09:17:50Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.571651 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-vcw7s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11da35cd-b282-4537-ac8f-b6c86b18c21f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gmv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T09:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vcw7s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.584211 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8ss9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ab8d16-75a8-4471-b540-95356246fbfa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdp8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdp8h\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdp8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdp8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdp8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdp8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vdp8h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T09:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8ss9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.592036 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-j8l6x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"203383be-f153-4823-b8bd-410046a5821a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9jb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T09:19:27Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-j8l6x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.601433 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-8hbvk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27afa170-d0be-48dd-a0d6-02a747bb8e63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwzh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwzh4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T09:19:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-8hbvk\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.608321 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.608345 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.608354 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.608368 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.608377 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:28Z","lastTransitionTime":"2026-01-21T09:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.609995 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54716098-20bf-4221-9f2a-b0f2167ca6b5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:18:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://153e83973dd71f7b23b37e6143b3c9de9d118112045570d63b00cdc939edc29a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T09:17:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c0b4092be17370b0e45f24a8e79a48cd1549f4ef547228bd2995d316975bdd42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T09:17:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1a0c5d81cf6abf48ca564992e06379ca3f1d1890623591e6b86d4c79694e2f7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\
\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T09:17:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b2848ce1074198a4b52b03d33de283d142451a392df5429e8ff195a46f6d0e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2848ce1074198a4b52b03d33de283d142451a392df5429e8ff195a46f6d0e86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T09:17:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T09:17:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T09:17:50Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.619242 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://99e82e5faa934032f7c6e7386fe917b6feb0034932cfec69cc4c85136151bca9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T09:19:28Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.629596 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.640046 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8tqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8tqq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T09:19:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-7dhnt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.646037 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d75af50-e19d-4048-b80e-51dae4c3378e-metrics-certs\") pod \"network-metrics-daemon-tcv7n\" (UID: \"0d75af50-e19d-4048-b80e-51dae4c3378e\") " pod="openshift-multus/network-metrics-daemon-tcv7n" Jan 21 09:19:28 crc kubenswrapper[5113]: E0121 09:19:28.646286 5113 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 09:19:28 crc kubenswrapper[5113]: E0121 09:19:28.646525 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d75af50-e19d-4048-b80e-51dae4c3378e-metrics-certs podName:0d75af50-e19d-4048-b80e-51dae4c3378e nodeName:}" failed. No retries permitted until 2026-01-21 09:19:29.646425729 +0000 UTC m=+99.147252808 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0d75af50-e19d-4048-b80e-51dae4c3378e-metrics-certs") pod "network-metrics-daemon-tcv7n" (UID: "0d75af50-e19d-4048-b80e-51dae4c3378e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.647699 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-fjwbv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7177e64-28e3-4b95-90ee-7d490e61bbb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9vvsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T09:19:27Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-fjwbv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.656198 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-tcv7n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d75af50-e19d-4048-b80e-51dae4c3378e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnh4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nnh4x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T09:19:27Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-tcv7n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.670685 5113 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4af3bb76-a840-45dd-941d-0b6ef5883ed8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-659sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-659sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-659sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-659sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-659sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-659sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-659sf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-659sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-659sf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T09:19:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qgkx4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.686036 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f00cdd-8dd2-479f-8e44-1eeefbb4c56f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:17:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:18:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:18:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T09:17:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://7c18a697677f607ab3265a3d04edbad68557370feb7ff27c2efe99d3180f75fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-01-21T09:17:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://8b462f33795ed36c96eb82d0605e3d0d75cda8a208712e5a08bbe1199b460457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T09:17:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f961975c0ec900635a901f00482c245a386ecd3f3e7dca899cecb812133ce940\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc
7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T09:17:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a4ec8a94add26d51ed633586f393082adc6e68c92b60b61a35848cc17f8f1b0c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T09:17:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T09:17:50Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.699067 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.710101 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.710240 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.710321 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.710432 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.710526 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:28Z","lastTransitionTime":"2026-01-21T09:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.720413 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.748371 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.765591 5113 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T09:19:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.812164 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.812309 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.812407 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 
09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.812495 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.812591 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:28Z","lastTransitionTime":"2026-01-21T09:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.847571 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.848558 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.849637 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.851216 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.853141 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.854771 5113 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.856243 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.857716 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.858349 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.859619 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.860427 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.861818 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.862422 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.863892 5113 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.864304 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.865044 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.866170 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.867302 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.868625 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.869420 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.870258 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.872348 5113 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.873433 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.874648 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.875853 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.876656 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.877862 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.878861 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.881131 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.882207 5113 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.883451 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.884929 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.886301 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.887653 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.888444 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.889369 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.890068 5113 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 21 09:19:28 
crc kubenswrapper[5113]: I0121 09:19:28.890164 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.893007 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.893976 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.895626 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.896529 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.897027 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.898368 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.899226 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Jan 21 09:19:28 
crc kubenswrapper[5113]: I0121 09:19:28.899837 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.901019 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.901970 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.903863 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.904694 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.905985 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.907318 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.908864 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Jan 21 09:19:28 
crc kubenswrapper[5113]: I0121 09:19:28.910643 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.913593 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.914709 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.914732 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.914754 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.914765 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.914778 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.914788 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:28Z","lastTransitionTime":"2026-01-21T09:19:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.918086 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Jan 21 09:19:28 crc kubenswrapper[5113]: I0121 09:19:28.921685 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.017953 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.018299 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.018311 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.018333 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.018346 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:29Z","lastTransitionTime":"2026-01-21T09:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.120260 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.120312 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.120324 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.120343 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.120358 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:29Z","lastTransitionTime":"2026-01-21T09:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.222944 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.222981 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.222992 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.223006 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.223015 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:29Z","lastTransitionTime":"2026-01-21T09:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.326044 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.326097 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.326115 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.326139 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.326158 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:29Z","lastTransitionTime":"2026-01-21T09:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.429633 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.429730 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.429792 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.429817 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.429839 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:29Z","lastTransitionTime":"2026-01-21T09:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.454832 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.454917 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.454989 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.455057 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 09:19:29 crc kubenswrapper[5113]: E0121 09:19:29.455118 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered
Jan 21 09:19:29 crc kubenswrapper[5113]: E0121 09:19:29.455159 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 21 09:19:29 crc kubenswrapper[5113]: E0121 09:19:29.455170 5113 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 21 09:19:29 crc kubenswrapper[5113]: E0121 09:19:29.455177 5113 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 09:19:29 crc kubenswrapper[5113]: E0121 09:19:29.455252 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 09:19:31.455230075 +0000 UTC m=+100.956057154 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 21 09:19:29 crc kubenswrapper[5113]: E0121 09:19:29.455279 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-21 09:19:31.455266486 +0000 UTC m=+100.956093565 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 09:19:29 crc kubenswrapper[5113]: E0121 09:19:29.455318 5113 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 21 09:19:29 crc kubenswrapper[5113]: E0121 09:19:29.455378 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 09:19:31.455350818 +0000 UTC m=+100.956177897 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 21 09:19:29 crc kubenswrapper[5113]: E0121 09:19:29.455419 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 21 09:19:29 crc kubenswrapper[5113]: E0121 09:19:29.455439 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 21 09:19:29 crc kubenswrapper[5113]: E0121 09:19:29.455457 5113 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 09:19:29 crc kubenswrapper[5113]: E0121 09:19:29.455506 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-21 09:19:31.455490912 +0000 UTC m=+100.956318001 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.537274 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"c8d57ed1a665a1b3b0f86711dac075c519ee0f3bdcfbc1d090def3dad2e094cf"}
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.541097 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vcw7s" event={"ID":"11da35cd-b282-4537-ac8f-b6c86b18c21f","Type":"ContainerStarted","Data":"ae662f5c068ffc7d4f5b76b096303acd87660f6089e6945d659a7a22cdde9e4e"}
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.543908 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-fjwbv" event={"ID":"f7177e64-28e3-4b95-90ee-7d490e61bbb1","Type":"ContainerStarted","Data":"99809963376ee23e3f4e60539d78df43b71d71235087bc5aee5122820c89cf54"}
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.547567 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" event={"ID":"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd","Type":"ContainerStarted","Data":"d5e35405358954b4e44654baa2fb5a0a4140312ae1ab9e63625c319c1fc7a9a7"}
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.547622 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" event={"ID":"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd","Type":"ContainerStarted","Data":"7d17dcb32a96369a06dcc2df64cb9fefcedaffbb772cab4d3e55898f62b9a7aa"}
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.552320 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.552394 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.552421 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.552453 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.552498 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:29Z","lastTransitionTime":"2026-01-21T09:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.554409 5113 generic.go:358] "Generic (PLEG): container finished" podID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" containerID="34e19902f610c0a75a9fbc00cb8c7cd18dbe5ea294e8217d48b706643672ee08" exitCode=0
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.554574 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" event={"ID":"4af3bb76-a840-45dd-941d-0b6ef5883ed8","Type":"ContainerDied","Data":"34e19902f610c0a75a9fbc00cb8c7cd18dbe5ea294e8217d48b706643672ee08"}
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.555798 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:19:29 crc kubenswrapper[5113]: E0121 09:19:29.555952 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:31.555916301 +0000 UTC m=+101.056743350 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.559640 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-8hbvk" event={"ID":"27afa170-d0be-48dd-a0d6-02a747bb8e63","Type":"ContainerStarted","Data":"3a619797c7b7d5cd486395c294ff9ec7d1ea07e371c4f9dd57c8a01d3c267715"}
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.559680 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-8hbvk" event={"ID":"27afa170-d0be-48dd-a0d6-02a747bb8e63","Type":"ContainerStarted","Data":"be61f8bf6692f95398878d7cf592ca5e54db57c2c507cb7c3b04068563b154e0"}
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.562171 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-j8l6x" event={"ID":"203383be-f153-4823-b8bd-410046a5821a","Type":"ContainerStarted","Data":"6ade75d5d65b46229b1495294ff931fb23153c9eb116ed0c9a50e2dd9bac24fc"}
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.564704 5113 generic.go:358] "Generic (PLEG): container finished" podID="73ab8d16-75a8-4471-b540-95356246fbfa" containerID="d13192f862bb1896fab230487d4c182b7a274cb6c09e100c1934a0342b32545f" exitCode=0
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.564737 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8ss9n" event={"ID":"73ab8d16-75a8-4471-b540-95356246fbfa","Type":"ContainerDied","Data":"d13192f862bb1896fab230487d4c182b7a274cb6c09e100c1934a0342b32545f"}
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.595942 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=2.5959304420000002 podStartE2EDuration="2.595930442s" podCreationTimestamp="2026-01-21 09:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:29.578395813 +0000 UTC m=+99.079222862" watchObservedRunningTime="2026-01-21 09:19:29.595930442 +0000 UTC m=+99.096757491"
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.654630 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.654659 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.654668 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.654679 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.654688 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:29Z","lastTransitionTime":"2026-01-21T09:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.656634 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d75af50-e19d-4048-b80e-51dae4c3378e-metrics-certs\") pod \"network-metrics-daemon-tcv7n\" (UID: \"0d75af50-e19d-4048-b80e-51dae4c3378e\") " pod="openshift-multus/network-metrics-daemon-tcv7n"
Jan 21 09:19:29 crc kubenswrapper[5113]: E0121 09:19:29.658173 5113 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 21 09:19:29 crc kubenswrapper[5113]: E0121 09:19:29.658261 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d75af50-e19d-4048-b80e-51dae4c3378e-metrics-certs podName:0d75af50-e19d-4048-b80e-51dae4c3378e nodeName:}" failed. No retries permitted until 2026-01-21 09:19:31.658238001 +0000 UTC m=+101.159065140 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0d75af50-e19d-4048-b80e-51dae4c3378e-metrics-certs") pod "network-metrics-daemon-tcv7n" (UID: "0d75af50-e19d-4048-b80e-51dae4c3378e") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.683631 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=1.6836092599999999 podStartE2EDuration="1.68360926s" podCreationTimestamp="2026-01-21 09:19:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:29.68323368 +0000 UTC m=+99.184060729" watchObservedRunningTime="2026-01-21 09:19:29.68360926 +0000 UTC m=+99.184436329"
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.724922 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=2.724901106 podStartE2EDuration="2.724901106s" podCreationTimestamp="2026-01-21 09:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:29.719252684 +0000 UTC m=+99.220079733" watchObservedRunningTime="2026-01-21 09:19:29.724901106 +0000 UTC m=+99.225728155"
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.761190 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.761236 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.761248 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.761265 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.761277 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:29Z","lastTransitionTime":"2026-01-21T09:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.826725 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=1.8267059799999998 podStartE2EDuration="1.82670598s" podCreationTimestamp="2026-01-21 09:19:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:29.813537648 +0000 UTC m=+99.314364697" watchObservedRunningTime="2026-01-21 09:19:29.82670598 +0000 UTC m=+99.327533029"
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.842803 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 21 09:19:29 crc kubenswrapper[5113]: E0121 09:19:29.843232 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.842966 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 21 09:19:29 crc kubenswrapper[5113]: E0121 09:19:29.843324 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.843031 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.842915 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcv7n"
Jan 21 09:19:29 crc kubenswrapper[5113]: E0121 09:19:29.843396 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Jan 21 09:19:29 crc kubenswrapper[5113]: E0121 09:19:29.843527 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcv7n" podUID="0d75af50-e19d-4048-b80e-51dae4c3378e"
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.863652 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.863701 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.863713 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.863734 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.863759 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:29Z","lastTransitionTime":"2026-01-21T09:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.919060 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-fjwbv" podStartSLOduration=79.919044203 podStartE2EDuration="1m19.919044203s" podCreationTimestamp="2026-01-21 09:18:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:29.9185689 +0000 UTC m=+99.419395949" watchObservedRunningTime="2026-01-21 09:19:29.919044203 +0000 UTC m=+99.419871252"
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.951801 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-vcw7s" podStartSLOduration=78.95178912 podStartE2EDuration="1m18.95178912s" podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:29.951312517 +0000 UTC m=+99.452139596" watchObservedRunningTime="2026-01-21 09:19:29.95178912 +0000 UTC m=+99.452616169"
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.965207 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.965251 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.965261 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.965278 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.965288 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:29Z","lastTransitionTime":"2026-01-21T09:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 09:19:29 crc kubenswrapper[5113]: I0121 09:19:29.985724 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-j8l6x" podStartSLOduration=78.985705088 podStartE2EDuration="1m18.985705088s" podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:29.985390189 +0000 UTC m=+99.486217248" watchObservedRunningTime="2026-01-21 09:19:29.985705088 +0000 UTC m=+99.486532137"
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.012784 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podStartSLOduration=79.012762362 podStartE2EDuration="1m19.012762362s" podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:30.012481405 +0000 UTC m=+99.513308474" watchObservedRunningTime="2026-01-21 09:19:30.012762362 +0000 UTC m=+99.513589411"
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.012900 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-8hbvk" podStartSLOduration=79.012893626 podStartE2EDuration="1m19.012893626s" podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:29.997205236 +0000 UTC m=+99.498032305" watchObservedRunningTime="2026-01-21 09:19:30.012893626 +0000 UTC m=+99.513720675"
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.067594 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.067664 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.067680 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.067706 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.067722 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:30Z","lastTransitionTime":"2026-01-21T09:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.174539 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.174587 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.174612 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.174633 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.174648 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:30Z","lastTransitionTime":"2026-01-21T09:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.278490 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.278534 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.278545 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.278571 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.278583 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:30Z","lastTransitionTime":"2026-01-21T09:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.380570 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.380627 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.380641 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.380661 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.380673 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:30Z","lastTransitionTime":"2026-01-21T09:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.482432 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.482475 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.482487 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.482504 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.482515 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:30Z","lastTransitionTime":"2026-01-21T09:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.573292 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" event={"ID":"4af3bb76-a840-45dd-941d-0b6ef5883ed8","Type":"ContainerStarted","Data":"97c631ada2ff6f7358da07a083e927818a0f340b60e2433ab787876433daf53f"}
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.573332 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" event={"ID":"4af3bb76-a840-45dd-941d-0b6ef5883ed8","Type":"ContainerStarted","Data":"2bbcfc4fe67fd6964c0a676c31d53d6a41ce547a0cea1860f431a866d57f2fbc"}
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.573341 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" event={"ID":"4af3bb76-a840-45dd-941d-0b6ef5883ed8","Type":"ContainerStarted","Data":"b76c6682140eef08a669526caeec358c600392a2765da92707cc82c75de617c9"}
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.573350 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" event={"ID":"4af3bb76-a840-45dd-941d-0b6ef5883ed8","Type":"ContainerStarted","Data":"4cd689a184396b05630e15f65947e3038ab9c331ef0d3475ec9efea6e4af0c4e"}
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.573360 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" event={"ID":"4af3bb76-a840-45dd-941d-0b6ef5883ed8","Type":"ContainerStarted","Data":"9822e6e2fc04b0bc932b908f806095755edfb24f3e4f2fae7b5f7c1911ec8e49"}
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.573369 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" event={"ID":"4af3bb76-a840-45dd-941d-0b6ef5883ed8","Type":"ContainerStarted","Data":"5fc370078d7251e4b7b4f881fd5ada8d75b9ceb62d1f69a492b485a6b2c2ef74"}
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.575217 5113 generic.go:358] "Generic (PLEG): container finished" podID="73ab8d16-75a8-4471-b540-95356246fbfa" containerID="3b930f5fa9a70afc2cbc76ce8504cabb1b07a9a75d3c73fb8f7831b2ba033e6b" exitCode=0
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.575300 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8ss9n" event={"ID":"73ab8d16-75a8-4471-b540-95356246fbfa","Type":"ContainerDied","Data":"3b930f5fa9a70afc2cbc76ce8504cabb1b07a9a75d3c73fb8f7831b2ba033e6b"}
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.584555 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.584726 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.584898 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.585007 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.585104 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:30Z","lastTransitionTime":"2026-01-21T09:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.688029 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.688071 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.688081 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.688096 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.688108 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:30Z","lastTransitionTime":"2026-01-21T09:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.789455 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.789488 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.789498 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.789514 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.789526 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:30Z","lastTransitionTime":"2026-01-21T09:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.891874 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.891926 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.891944 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.892008 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.892037 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:30Z","lastTransitionTime":"2026-01-21T09:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.995258 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.995509 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.995518 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.995531 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:30 crc kubenswrapper[5113]: I0121 09:19:30.995540 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:30Z","lastTransitionTime":"2026-01-21T09:19:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.098240 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.098287 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.098300 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.098316 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.098330 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:31Z","lastTransitionTime":"2026-01-21T09:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.202516 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.202561 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.202573 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.202587 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.202596 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:31Z","lastTransitionTime":"2026-01-21T09:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.304948 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.305021 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.305047 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.305078 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.305100 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:31Z","lastTransitionTime":"2026-01-21T09:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.406871 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.406935 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.406962 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.406983 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.406998 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:31Z","lastTransitionTime":"2026-01-21T09:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.483279 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.483356 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.483400 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.483459 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 09:19:31 crc kubenswrapper[5113]: E0121 09:19:31.483625 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Jan 21 09:19:31 crc kubenswrapper[5113]: E0121 09:19:31.483651 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 09:19:31 crc kubenswrapper[5113]: E0121 09:19:31.483668 5113 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 09:19:31 crc kubenswrapper[5113]: E0121 09:19:31.483775 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-21 09:19:35.483723157 +0000 UTC m=+104.984550236 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 09:19:31 crc kubenswrapper[5113]: E0121 09:19:31.484002 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 09:19:31 crc kubenswrapper[5113]: E0121 09:19:31.484060 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 09:19:31 crc kubenswrapper[5113]: E0121 09:19:31.484082 5113 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 09:19:31 crc kubenswrapper[5113]: E0121 09:19:31.484110 5113 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 09:19:31 crc kubenswrapper[5113]: E0121 09:19:31.484013 5113 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 09:19:31 crc kubenswrapper[5113]: E0121 09:19:31.484224 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 
nodeName:}" failed. No retries permitted until 2026-01-21 09:19:35.48419683 +0000 UTC m=+104.985023879 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 09:19:31 crc kubenswrapper[5113]: E0121 09:19:31.484249 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-21 09:19:35.484238021 +0000 UTC m=+104.985065190 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 09:19:31 crc kubenswrapper[5113]: E0121 09:19:31.484317 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 09:19:35.484277752 +0000 UTC m=+104.985104831 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.509158 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.509225 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.509243 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.509268 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.509286 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:31Z","lastTransitionTime":"2026-01-21T09:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.581567 5113 generic.go:358] "Generic (PLEG): container finished" podID="73ab8d16-75a8-4471-b540-95356246fbfa" containerID="d7dc99b8d14190eb2c9161edca2c42ccea0d265197f671c05353bb53f336f3ee" exitCode=0 Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.581786 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8ss9n" event={"ID":"73ab8d16-75a8-4471-b540-95356246fbfa","Type":"ContainerDied","Data":"d7dc99b8d14190eb2c9161edca2c42ccea0d265197f671c05353bb53f336f3ee"} Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.583722 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:19:31 crc kubenswrapper[5113]: E0121 09:19:31.584153 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:35.584128275 +0000 UTC m=+105.084955364 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.584674 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"63469bda19b0a3ba548ac51ed0317b3ac0502bfd8f0785f8ec274f60682f0696"} Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.614181 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.614384 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.614451 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.614517 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.614579 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:31Z","lastTransitionTime":"2026-01-21T09:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.685665 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d75af50-e19d-4048-b80e-51dae4c3378e-metrics-certs\") pod \"network-metrics-daemon-tcv7n\" (UID: \"0d75af50-e19d-4048-b80e-51dae4c3378e\") " pod="openshift-multus/network-metrics-daemon-tcv7n" Jan 21 09:19:31 crc kubenswrapper[5113]: E0121 09:19:31.685908 5113 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 09:19:31 crc kubenswrapper[5113]: E0121 09:19:31.685986 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d75af50-e19d-4048-b80e-51dae4c3378e-metrics-certs podName:0d75af50-e19d-4048-b80e-51dae4c3378e nodeName:}" failed. No retries permitted until 2026-01-21 09:19:35.685967852 +0000 UTC m=+105.186794901 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0d75af50-e19d-4048-b80e-51dae4c3378e-metrics-certs") pod "network-metrics-daemon-tcv7n" (UID: "0d75af50-e19d-4048-b80e-51dae4c3378e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.716589 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.716629 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.716638 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.716653 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.716664 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:31Z","lastTransitionTime":"2026-01-21T09:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.818560 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.818613 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.818631 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.818654 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.818672 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:31Z","lastTransitionTime":"2026-01-21T09:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.843071 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcv7n" Jan 21 09:19:31 crc kubenswrapper[5113]: E0121 09:19:31.843225 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcv7n" podUID="0d75af50-e19d-4048-b80e-51dae4c3378e" Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.843318 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 09:19:31 crc kubenswrapper[5113]: E0121 09:19:31.843410 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.843490 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 09:19:31 crc kubenswrapper[5113]: E0121 09:19:31.843576 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.843658 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 09:19:31 crc kubenswrapper[5113]: E0121 09:19:31.843774 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.920665 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.920709 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.920723 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.920759 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:31 crc kubenswrapper[5113]: I0121 09:19:31.920776 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:31Z","lastTransitionTime":"2026-01-21T09:19:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.023561 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.023981 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.024006 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.024029 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.024041 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:32Z","lastTransitionTime":"2026-01-21T09:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.126446 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.126486 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.126496 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.126510 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.126520 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:32Z","lastTransitionTime":"2026-01-21T09:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.229539 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.229609 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.229630 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.229654 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.229672 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:32Z","lastTransitionTime":"2026-01-21T09:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.332491 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.332569 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.332596 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.332630 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.332662 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:32Z","lastTransitionTime":"2026-01-21T09:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.435113 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.435174 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.435196 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.435223 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.435243 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:32Z","lastTransitionTime":"2026-01-21T09:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.537809 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.537871 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.537888 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.537904 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.537914 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:32Z","lastTransitionTime":"2026-01-21T09:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.594162 5113 generic.go:358] "Generic (PLEG): container finished" podID="73ab8d16-75a8-4471-b540-95356246fbfa" containerID="aed77cc83733c7b7621683f8b3222f966f549976c3078dd4440574bdb14998b0" exitCode=0 Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.594291 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8ss9n" event={"ID":"73ab8d16-75a8-4471-b540-95356246fbfa","Type":"ContainerDied","Data":"aed77cc83733c7b7621683f8b3222f966f549976c3078dd4440574bdb14998b0"} Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.601204 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" event={"ID":"4af3bb76-a840-45dd-941d-0b6ef5883ed8","Type":"ContainerStarted","Data":"646286a53000eee78bc55c5568670e1f7cd99166697afef7e67a9045c29696ab"} Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.644017 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.644059 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.644069 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.644086 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.644097 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:32Z","lastTransitionTime":"2026-01-21T09:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.746304 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.746357 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.746377 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.746439 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.746466 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:32Z","lastTransitionTime":"2026-01-21T09:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.848074 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.848124 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.848135 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.848149 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.848159 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:32Z","lastTransitionTime":"2026-01-21T09:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.950819 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.950855 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.950864 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.950877 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:32 crc kubenswrapper[5113]: I0121 09:19:32.950887 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:32Z","lastTransitionTime":"2026-01-21T09:19:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.062350 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.062412 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.062430 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.062455 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.062472 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:33Z","lastTransitionTime":"2026-01-21T09:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.165188 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.165236 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.165249 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.165267 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.165279 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:33Z","lastTransitionTime":"2026-01-21T09:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.268183 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.268223 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.268236 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.268255 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.268269 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:33Z","lastTransitionTime":"2026-01-21T09:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.371822 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.371874 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.371892 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.371915 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.371932 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:33Z","lastTransitionTime":"2026-01-21T09:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.474345 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.474722 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.474944 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.475097 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.475222 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:33Z","lastTransitionTime":"2026-01-21T09:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.577416 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.577818 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.577976 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.578150 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.578344 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:33Z","lastTransitionTime":"2026-01-21T09:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.612952 5113 generic.go:358] "Generic (PLEG): container finished" podID="73ab8d16-75a8-4471-b540-95356246fbfa" containerID="e75e2c4eabdb72469d70be1b74cf4d305a39ee2d63a3f651a7ac490cc80f933d" exitCode=0 Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.613027 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8ss9n" event={"ID":"73ab8d16-75a8-4471-b540-95356246fbfa","Type":"ContainerDied","Data":"e75e2c4eabdb72469d70be1b74cf4d305a39ee2d63a3f651a7ac490cc80f933d"} Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.680198 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.680230 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.680239 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.680252 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.680260 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:33Z","lastTransitionTime":"2026-01-21T09:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.750897 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.750956 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.750974 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.750998 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.751016 5113 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T09:19:33Z","lastTransitionTime":"2026-01-21T09:19:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.809356 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-kwpjf"] Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.815393 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-kwpjf" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.817020 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.818393 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.818695 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.819446 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.843346 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcv7n" Jan 21 09:19:33 crc kubenswrapper[5113]: E0121 09:19:33.843511 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcv7n" podUID="0d75af50-e19d-4048-b80e-51dae4c3378e" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.843697 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 09:19:33 crc kubenswrapper[5113]: E0121 09:19:33.843856 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.843350 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 09:19:33 crc kubenswrapper[5113]: E0121 09:19:33.843947 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.843976 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 09:19:33 crc kubenswrapper[5113]: E0121 09:19:33.844031 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.850127 5113 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.858959 5113 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.919692 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/cd8a4cbe-22f6-4a60-bddd-f48aa661174f-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-kwpjf\" (UID: \"cd8a4cbe-22f6-4a60-bddd-f48aa661174f\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-kwpjf" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.919762 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/cd8a4cbe-22f6-4a60-bddd-f48aa661174f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-kwpjf\" (UID: \"cd8a4cbe-22f6-4a60-bddd-f48aa661174f\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-kwpjf" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.919808 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd8a4cbe-22f6-4a60-bddd-f48aa661174f-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-kwpjf\" (UID: \"cd8a4cbe-22f6-4a60-bddd-f48aa661174f\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-kwpjf" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.919840 5113 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cd8a4cbe-22f6-4a60-bddd-f48aa661174f-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-kwpjf\" (UID: \"cd8a4cbe-22f6-4a60-bddd-f48aa661174f\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-kwpjf" Jan 21 09:19:33 crc kubenswrapper[5113]: I0121 09:19:33.919918 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd8a4cbe-22f6-4a60-bddd-f48aa661174f-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-kwpjf\" (UID: \"cd8a4cbe-22f6-4a60-bddd-f48aa661174f\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-kwpjf" Jan 21 09:19:34 crc kubenswrapper[5113]: I0121 09:19:34.020841 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/cd8a4cbe-22f6-4a60-bddd-f48aa661174f-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-kwpjf\" (UID: \"cd8a4cbe-22f6-4a60-bddd-f48aa661174f\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-kwpjf" Jan 21 09:19:34 crc kubenswrapper[5113]: I0121 09:19:34.020909 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/cd8a4cbe-22f6-4a60-bddd-f48aa661174f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-kwpjf\" (UID: \"cd8a4cbe-22f6-4a60-bddd-f48aa661174f\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-kwpjf" Jan 21 09:19:34 crc kubenswrapper[5113]: I0121 09:19:34.020931 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd8a4cbe-22f6-4a60-bddd-f48aa661174f-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-kwpjf\" (UID: 
\"cd8a4cbe-22f6-4a60-bddd-f48aa661174f\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-kwpjf" Jan 21 09:19:34 crc kubenswrapper[5113]: I0121 09:19:34.020947 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/cd8a4cbe-22f6-4a60-bddd-f48aa661174f-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-kwpjf\" (UID: \"cd8a4cbe-22f6-4a60-bddd-f48aa661174f\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-kwpjf" Jan 21 09:19:34 crc kubenswrapper[5113]: I0121 09:19:34.020975 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd8a4cbe-22f6-4a60-bddd-f48aa661174f-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-kwpjf\" (UID: \"cd8a4cbe-22f6-4a60-bddd-f48aa661174f\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-kwpjf" Jan 21 09:19:34 crc kubenswrapper[5113]: I0121 09:19:34.021471 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/cd8a4cbe-22f6-4a60-bddd-f48aa661174f-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-kwpjf\" (UID: \"cd8a4cbe-22f6-4a60-bddd-f48aa661174f\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-kwpjf" Jan 21 09:19:34 crc kubenswrapper[5113]: I0121 09:19:34.021517 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/cd8a4cbe-22f6-4a60-bddd-f48aa661174f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-kwpjf\" (UID: \"cd8a4cbe-22f6-4a60-bddd-f48aa661174f\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-kwpjf" Jan 21 09:19:34 crc kubenswrapper[5113]: I0121 09:19:34.023372 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/cd8a4cbe-22f6-4a60-bddd-f48aa661174f-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-kwpjf\" (UID: \"cd8a4cbe-22f6-4a60-bddd-f48aa661174f\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-kwpjf" Jan 21 09:19:34 crc kubenswrapper[5113]: I0121 09:19:34.033541 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd8a4cbe-22f6-4a60-bddd-f48aa661174f-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-kwpjf\" (UID: \"cd8a4cbe-22f6-4a60-bddd-f48aa661174f\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-kwpjf" Jan 21 09:19:34 crc kubenswrapper[5113]: I0121 09:19:34.042871 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd8a4cbe-22f6-4a60-bddd-f48aa661174f-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-kwpjf\" (UID: \"cd8a4cbe-22f6-4a60-bddd-f48aa661174f\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-kwpjf" Jan 21 09:19:34 crc kubenswrapper[5113]: I0121 09:19:34.137897 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-kwpjf" Jan 21 09:19:34 crc kubenswrapper[5113]: W0121 09:19:34.160864 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd8a4cbe_22f6_4a60_bddd_f48aa661174f.slice/crio-a262af56d499422ef58cad85adf9dc4c844260b071a378ed5b7e0e0316612434 WatchSource:0}: Error finding container a262af56d499422ef58cad85adf9dc4c844260b071a378ed5b7e0e0316612434: Status 404 returned error can't find the container with id a262af56d499422ef58cad85adf9dc4c844260b071a378ed5b7e0e0316612434 Jan 21 09:19:34 crc kubenswrapper[5113]: I0121 09:19:34.624441 5113 generic.go:358] "Generic (PLEG): container finished" podID="73ab8d16-75a8-4471-b540-95356246fbfa" containerID="ba4f00cd967ebb62e95f93ef71b893cb17f54222f89869ed3c0cdc1a1898238f" exitCode=0 Jan 21 09:19:34 crc kubenswrapper[5113]: I0121 09:19:34.624510 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8ss9n" event={"ID":"73ab8d16-75a8-4471-b540-95356246fbfa","Type":"ContainerDied","Data":"ba4f00cd967ebb62e95f93ef71b893cb17f54222f89869ed3c0cdc1a1898238f"} Jan 21 09:19:34 crc kubenswrapper[5113]: I0121 09:19:34.626386 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-kwpjf" event={"ID":"cd8a4cbe-22f6-4a60-bddd-f48aa661174f","Type":"ContainerStarted","Data":"a262af56d499422ef58cad85adf9dc4c844260b071a378ed5b7e0e0316612434"} Jan 21 09:19:35 crc kubenswrapper[5113]: I0121 09:19:35.539923 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 
09:19:35 crc kubenswrapper[5113]: I0121 09:19:35.540610 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 09:19:35 crc kubenswrapper[5113]: I0121 09:19:35.540778 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 09:19:35 crc kubenswrapper[5113]: I0121 09:19:35.540933 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 09:19:35 crc kubenswrapper[5113]: E0121 09:19:35.540146 5113 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 09:19:35 crc kubenswrapper[5113]: E0121 09:19:35.541411 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 09:19:43.541388811 +0000 UTC m=+113.042215880 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 09:19:35 crc kubenswrapper[5113]: E0121 09:19:35.541606 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 09:19:35 crc kubenswrapper[5113]: E0121 09:19:35.541709 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 09:19:35 crc kubenswrapper[5113]: E0121 09:19:35.541901 5113 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 09:19:35 crc kubenswrapper[5113]: E0121 09:19:35.542068 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-21 09:19:43.542052839 +0000 UTC m=+113.042879908 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 09:19:35 crc kubenswrapper[5113]: E0121 09:19:35.541200 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 09:19:35 crc kubenswrapper[5113]: E0121 09:19:35.542304 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 09:19:35 crc kubenswrapper[5113]: E0121 09:19:35.542412 5113 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 09:19:35 crc kubenswrapper[5113]: E0121 09:19:35.542556 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-21 09:19:43.542540922 +0000 UTC m=+113.043367981 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 09:19:35 crc kubenswrapper[5113]: E0121 09:19:35.542140 5113 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 09:19:35 crc kubenswrapper[5113]: E0121 09:19:35.542900 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 09:19:43.542886451 +0000 UTC m=+113.043713520 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 09:19:35 crc kubenswrapper[5113]: I0121 09:19:35.631687 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" event={"ID":"4af3bb76-a840-45dd-941d-0b6ef5883ed8","Type":"ContainerStarted","Data":"4e8b9346b62936928cc5545618165392c34310abf146dfe5ecbd67ae78585b57"} Jan 21 09:19:35 crc kubenswrapper[5113]: I0121 09:19:35.632246 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:35 crc kubenswrapper[5113]: I0121 09:19:35.632319 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:35 crc kubenswrapper[5113]: I0121 09:19:35.632388 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:35 crc kubenswrapper[5113]: I0121 09:19:35.635721 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8ss9n" event={"ID":"73ab8d16-75a8-4471-b540-95356246fbfa","Type":"ContainerStarted","Data":"eb0ae35db48bf774c7d6a4f666f283b0b3a99bb93e3e9c19c99c6e5e2bd8a81f"} Jan 21 09:19:35 crc kubenswrapper[5113]: I0121 09:19:35.637333 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-kwpjf" event={"ID":"cd8a4cbe-22f6-4a60-bddd-f48aa661174f","Type":"ContainerStarted","Data":"a948a3a6c2c3d099a8a58d5414a64a886eb6d5cd041c889f613102af79c87fbb"} Jan 21 09:19:35 crc kubenswrapper[5113]: I0121 09:19:35.641649 5113 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:19:35 crc kubenswrapper[5113]: E0121 09:19:35.641950 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:43.641928883 +0000 UTC m=+113.142755932 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:35 crc kubenswrapper[5113]: I0121 09:19:35.661042 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:35 crc kubenswrapper[5113]: I0121 09:19:35.667048 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:19:35 crc kubenswrapper[5113]: I0121 09:19:35.669846 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" podStartSLOduration=84.66982561 podStartE2EDuration="1m24.66982561s" podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 
09:19:35.669215324 +0000 UTC m=+105.170042383" watchObservedRunningTime="2026-01-21 09:19:35.66982561 +0000 UTC m=+105.170652659" Jan 21 09:19:35 crc kubenswrapper[5113]: I0121 09:19:35.688405 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-8ss9n" podStartSLOduration=84.688384027 podStartE2EDuration="1m24.688384027s" podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:35.687198105 +0000 UTC m=+105.188025154" watchObservedRunningTime="2026-01-21 09:19:35.688384027 +0000 UTC m=+105.189211076" Jan 21 09:19:35 crc kubenswrapper[5113]: I0121 09:19:35.703865 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-kwpjf" podStartSLOduration=84.703847921 podStartE2EDuration="1m24.703847921s" podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:35.702437863 +0000 UTC m=+105.203264932" watchObservedRunningTime="2026-01-21 09:19:35.703847921 +0000 UTC m=+105.204674970" Jan 21 09:19:35 crc kubenswrapper[5113]: I0121 09:19:35.742841 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d75af50-e19d-4048-b80e-51dae4c3378e-metrics-certs\") pod \"network-metrics-daemon-tcv7n\" (UID: \"0d75af50-e19d-4048-b80e-51dae4c3378e\") " pod="openshift-multus/network-metrics-daemon-tcv7n" Jan 21 09:19:35 crc kubenswrapper[5113]: E0121 09:19:35.743069 5113 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 09:19:35 crc kubenswrapper[5113]: E0121 09:19:35.743148 5113 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d75af50-e19d-4048-b80e-51dae4c3378e-metrics-certs podName:0d75af50-e19d-4048-b80e-51dae4c3378e nodeName:}" failed. No retries permitted until 2026-01-21 09:19:43.743129843 +0000 UTC m=+113.243956892 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0d75af50-e19d-4048-b80e-51dae4c3378e-metrics-certs") pod "network-metrics-daemon-tcv7n" (UID: "0d75af50-e19d-4048-b80e-51dae4c3378e") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 09:19:35 crc kubenswrapper[5113]: I0121 09:19:35.842512 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 09:19:35 crc kubenswrapper[5113]: I0121 09:19:35.842559 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 09:19:35 crc kubenswrapper[5113]: E0121 09:19:35.842650 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 21 09:19:35 crc kubenswrapper[5113]: I0121 09:19:35.842686 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcv7n" Jan 21 09:19:35 crc kubenswrapper[5113]: E0121 09:19:35.842903 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 21 09:19:35 crc kubenswrapper[5113]: I0121 09:19:35.843047 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 09:19:35 crc kubenswrapper[5113]: E0121 09:19:35.843040 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcv7n" podUID="0d75af50-e19d-4048-b80e-51dae4c3378e" Jan 21 09:19:35 crc kubenswrapper[5113]: E0121 09:19:35.843121 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 21 09:19:37 crc kubenswrapper[5113]: I0121 09:19:37.104559 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-tcv7n"] Jan 21 09:19:37 crc kubenswrapper[5113]: I0121 09:19:37.105291 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcv7n" Jan 21 09:19:37 crc kubenswrapper[5113]: E0121 09:19:37.105465 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcv7n" podUID="0d75af50-e19d-4048-b80e-51dae4c3378e" Jan 21 09:19:37 crc kubenswrapper[5113]: I0121 09:19:37.843169 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 09:19:37 crc kubenswrapper[5113]: I0121 09:19:37.843195 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 09:19:37 crc kubenswrapper[5113]: I0121 09:19:37.843237 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 09:19:37 crc kubenswrapper[5113]: E0121 09:19:37.843322 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 21 09:19:37 crc kubenswrapper[5113]: E0121 09:19:37.843467 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 21 09:19:37 crc kubenswrapper[5113]: E0121 09:19:37.843557 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 21 09:19:38 crc kubenswrapper[5113]: I0121 09:19:38.849338 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcv7n" Jan 21 09:19:38 crc kubenswrapper[5113]: E0121 09:19:38.849483 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-tcv7n" podUID="0d75af50-e19d-4048-b80e-51dae4c3378e" Jan 21 09:19:38 crc kubenswrapper[5113]: I0121 09:19:38.850599 5113 scope.go:117] "RemoveContainer" containerID="383eb31f942f4a72a515ee030cd46d5e1130d7d74a8927d5daa09c8d744a67f6" Jan 21 09:19:38 crc kubenswrapper[5113]: E0121 09:19:38.851332 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 21 09:19:39 crc kubenswrapper[5113]: I0121 09:19:39.842416 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 09:19:39 crc kubenswrapper[5113]: E0121 09:19:39.842562 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 21 09:19:39 crc kubenswrapper[5113]: I0121 09:19:39.843001 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 09:19:39 crc kubenswrapper[5113]: E0121 09:19:39.843074 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 21 09:19:39 crc kubenswrapper[5113]: I0121 09:19:39.843117 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 09:19:39 crc kubenswrapper[5113]: E0121 09:19:39.843172 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 21 09:19:40 crc kubenswrapper[5113]: I0121 09:19:40.844620 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcv7n" Jan 21 09:19:40 crc kubenswrapper[5113]: E0121 09:19:40.844872 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-tcv7n" podUID="0d75af50-e19d-4048-b80e-51dae4c3378e" Jan 21 09:19:41 crc kubenswrapper[5113]: I0121 09:19:41.843388 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 09:19:41 crc kubenswrapper[5113]: I0121 09:19:41.843481 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 09:19:41 crc kubenswrapper[5113]: E0121 09:19:41.843983 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 21 09:19:41 crc kubenswrapper[5113]: E0121 09:19:41.844018 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 21 09:19:41 crc kubenswrapper[5113]: I0121 09:19:41.843561 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 09:19:41 crc kubenswrapper[5113]: E0121 09:19:41.844661 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.408607 5113 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.408872 5113 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.460631 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-twjhp"] Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.468997 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-hcv8d"] Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.469255 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-twjhp" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.475216 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.475535 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-hcv8d" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.475241 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.476581 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-bnjd9"] Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.480969 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.481289 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-f74j6"] Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.481553 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.482187 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.487284 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.487536 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.487886 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-6cwdn"] Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.488063 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-f74j6" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.495862 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-4nhkn"] Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.501384 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4nhkn" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.502303 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-6cwdn" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.508368 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.508377 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.508562 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.508667 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.508799 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.509894 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Jan 21 09:19:42 crc 
kubenswrapper[5113]: I0121 09:19:42.509928 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.510254 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.513321 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.514265 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.514491 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.514670 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.515706 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.515750 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.515789 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.515940 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-78rvb"] Jan 21 
09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.515711 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.516066 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.519934 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.520047 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.520117 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.522154 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-hr5w7"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.536818 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.546100 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-78rvb"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.550044 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.550097 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.550135 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.550210 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.550105 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.550295 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.550304 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.550403 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.550623 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.550675 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.550887 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.551013 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.552228 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8c97f377-41b0-44cb-a19b-fd80d93481e2-auth-proxy-config\") pod \"machine-approver-54c688565-hcv8d\" (UID: \"8c97f377-41b0-44cb-a19b-fd80d93481e2\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-hcv8d"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.552282 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfs4r\" (UniqueName: \"kubernetes.io/projected/b8ec25fb-f982-43d5-90dd-2f369ea2fa7f-kube-api-access-zfs4r\") pod \"controller-manager-65b6cccf98-twjhp\" (UID: \"b8ec25fb-f982-43d5-90dd-2f369ea2fa7f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-twjhp"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.552318 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8483d143-98a0-4b97-af59-bf98eceb47cd-serving-cert\") pod \"apiserver-9ddfb9f55-bnjd9\" (UID: \"8483d143-98a0-4b97-af59-bf98eceb47cd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.552349 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8ec25fb-f982-43d5-90dd-2f369ea2fa7f-client-ca\") pod \"controller-manager-65b6cccf98-twjhp\" (UID: \"b8ec25fb-f982-43d5-90dd-2f369ea2fa7f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-twjhp"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.552384 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8483d143-98a0-4b97-af59-bf98eceb47cd-audit\") pod \"apiserver-9ddfb9f55-bnjd9\" (UID: \"8483d143-98a0-4b97-af59-bf98eceb47cd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.552432 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8483d143-98a0-4b97-af59-bf98eceb47cd-node-pullsecrets\") pod \"apiserver-9ddfb9f55-bnjd9\" (UID: \"8483d143-98a0-4b97-af59-bf98eceb47cd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.552465 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8483d143-98a0-4b97-af59-bf98eceb47cd-encryption-config\") pod \"apiserver-9ddfb9f55-bnjd9\" (UID: \"8483d143-98a0-4b97-af59-bf98eceb47cd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.552491 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8483d143-98a0-4b97-af59-bf98eceb47cd-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-bnjd9\" (UID: \"8483d143-98a0-4b97-af59-bf98eceb47cd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.552513 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnj9q\" (UniqueName: \"kubernetes.io/projected/8483d143-98a0-4b97-af59-bf98eceb47cd-kube-api-access-qnj9q\") pod \"apiserver-9ddfb9f55-bnjd9\" (UID: \"8483d143-98a0-4b97-af59-bf98eceb47cd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.552562 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c97f377-41b0-44cb-a19b-fd80d93481e2-config\") pod \"machine-approver-54c688565-hcv8d\" (UID: \"8c97f377-41b0-44cb-a19b-fd80d93481e2\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-hcv8d"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.552588 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b8ec25fb-f982-43d5-90dd-2f369ea2fa7f-tmp\") pod \"controller-manager-65b6cccf98-twjhp\" (UID: \"b8ec25fb-f982-43d5-90dd-2f369ea2fa7f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-twjhp"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.552620 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8483d143-98a0-4b97-af59-bf98eceb47cd-image-import-ca\") pod \"apiserver-9ddfb9f55-bnjd9\" (UID: \"8483d143-98a0-4b97-af59-bf98eceb47cd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.552642 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8483d143-98a0-4b97-af59-bf98eceb47cd-audit-dir\") pod \"apiserver-9ddfb9f55-bnjd9\" (UID: \"8483d143-98a0-4b97-af59-bf98eceb47cd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.552664 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8483d143-98a0-4b97-af59-bf98eceb47cd-config\") pod \"apiserver-9ddfb9f55-bnjd9\" (UID: \"8483d143-98a0-4b97-af59-bf98eceb47cd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.552687 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8ec25fb-f982-43d5-90dd-2f369ea2fa7f-config\") pod \"controller-manager-65b6cccf98-twjhp\" (UID: \"b8ec25fb-f982-43d5-90dd-2f369ea2fa7f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-twjhp"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.552708 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8ec25fb-f982-43d5-90dd-2f369ea2fa7f-serving-cert\") pod \"controller-manager-65b6cccf98-twjhp\" (UID: \"b8ec25fb-f982-43d5-90dd-2f369ea2fa7f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-twjhp"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.552757 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8c97f377-41b0-44cb-a19b-fd80d93481e2-machine-approver-tls\") pod \"machine-approver-54c688565-hcv8d\" (UID: \"8c97f377-41b0-44cb-a19b-fd80d93481e2\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-hcv8d"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.552792 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8483d143-98a0-4b97-af59-bf98eceb47cd-etcd-client\") pod \"apiserver-9ddfb9f55-bnjd9\" (UID: \"8483d143-98a0-4b97-af59-bf98eceb47cd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.552826 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8483d143-98a0-4b97-af59-bf98eceb47cd-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-bnjd9\" (UID: \"8483d143-98a0-4b97-af59-bf98eceb47cd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.552858 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5wkf\" (UniqueName: \"kubernetes.io/projected/8c97f377-41b0-44cb-a19b-fd80d93481e2-kube-api-access-j5wkf\") pod \"machine-approver-54c688565-hcv8d\" (UID: \"8c97f377-41b0-44cb-a19b-fd80d93481e2\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-hcv8d"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.552879 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b8ec25fb-f982-43d5-90dd-2f369ea2fa7f-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-twjhp\" (UID: \"b8ec25fb-f982-43d5-90dd-2f369ea2fa7f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-twjhp"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.555998 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-wcvvf"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.557709 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.560218 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-qtl4r"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.561148 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wcvvf"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.563193 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.563335 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.563456 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.563619 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.563895 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.564089 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.564809 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.565428 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.567015 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.567219 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.567372 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-hr5w7"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.567612 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.567653 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-qtl4r"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.567264 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qbp2v"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.570713 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.573042 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.573521 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.576133 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.576909 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.576951 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-b46m8"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.577427 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.577528 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.577646 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.577729 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.577797 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.577820 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.577894 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.577904 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.577913 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.577995 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.578053 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.579548 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-ggq6c"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.579699 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-b46m8"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.580138 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qbp2v"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.582244 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-q8ftx"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.582384 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-ggq6c"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.586669 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.587651 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.601052 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-nrprx"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.621117 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.621293 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-q8ftx"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.622034 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.623490 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-2jmh4"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.623660 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-nrprx"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.625350 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.629860 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-8v2mh"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.630384 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.630546 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.631413 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.631658 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.631839 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.631888 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.632044 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.632065 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.632103 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.632224 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.632517 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.632531 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-2jmh4"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.633575 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.633721 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.633893 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.633906 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.633942 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.633900 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.634011 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.634301 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.634466 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.634905 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.635147 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.635303 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.635637 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-qgt8d"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.636055 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-8v2mh"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.636075 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.636886 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.638057 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.638189 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.638215 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-fdg8l"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.640251 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.640515 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.640730 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-vxj85"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.641169 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.641680 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-qgt8d"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.642146 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-fdg8l"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.645712 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-tbd7w"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.646331 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-vxj85"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.651159 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-wt9pn"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.651451 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.651642 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-tbd7w"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.653313 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8483d143-98a0-4b97-af59-bf98eceb47cd-encryption-config\") pod \"apiserver-9ddfb9f55-bnjd9\" (UID: \"8483d143-98a0-4b97-af59-bf98eceb47cd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.653353 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8483d143-98a0-4b97-af59-bf98eceb47cd-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-bnjd9\" (UID: \"8483d143-98a0-4b97-af59-bf98eceb47cd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.653381 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qnj9q\" (UniqueName: \"kubernetes.io/projected/8483d143-98a0-4b97-af59-bf98eceb47cd-kube-api-access-qnj9q\") pod \"apiserver-9ddfb9f55-bnjd9\" (UID: \"8483d143-98a0-4b97-af59-bf98eceb47cd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.653410 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/579c5f31-2382-48db-8f59-60a7ed0827ed-config\") pod \"route-controller-manager-776cdc94d6-wcvvf\" (UID: \"579c5f31-2382-48db-8f59-60a7ed0827ed\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wcvvf"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.653456 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d9041fd4-ea1a-453e-b9c6-efe382434cc0-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-qtl4r\" (UID: \"d9041fd4-ea1a-453e-b9c6-efe382434cc0\") " pod="openshift-machine-api/machine-api-operator-755bb95488-qtl4r"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.653477 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcgls\" (UniqueName: \"kubernetes.io/projected/d9041fd4-ea1a-453e-b9c6-efe382434cc0-kube-api-access-lcgls\") pod \"machine-api-operator-755bb95488-qtl4r\" (UID: \"d9041fd4-ea1a-453e-b9c6-efe382434cc0\") " pod="openshift-machine-api/machine-api-operator-755bb95488-qtl4r"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.653500 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c97f377-41b0-44cb-a19b-fd80d93481e2-config\") pod \"machine-approver-54c688565-hcv8d\" (UID: \"8c97f377-41b0-44cb-a19b-fd80d93481e2\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-hcv8d"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.653524 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b8ec25fb-f982-43d5-90dd-2f369ea2fa7f-tmp\") pod \"controller-manager-65b6cccf98-twjhp\" (UID: \"b8ec25fb-f982-43d5-90dd-2f369ea2fa7f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-twjhp"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.653544 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc029c59-845c-4021-be17-fe92d61a361f-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-qbp2v\" (UID: \"cc029c59-845c-4021-be17-fe92d61a361f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qbp2v"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.653565 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/818c11fe-b682-4b2c-9f47-dee838219e31-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-78rvb\" (UID: \"818c11fe-b682-4b2c-9f47-dee838219e31\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-78rvb"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.653586 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/579c5f31-2382-48db-8f59-60a7ed0827ed-serving-cert\") pod \"route-controller-manager-776cdc94d6-wcvvf\" (UID: \"579c5f31-2382-48db-8f59-60a7ed0827ed\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wcvvf"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.653617 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8483d143-98a0-4b97-af59-bf98eceb47cd-image-import-ca\") pod \"apiserver-9ddfb9f55-bnjd9\" (UID: \"8483d143-98a0-4b97-af59-bf98eceb47cd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.653637 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-f74j6\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " pod="openshift-authentication/oauth-openshift-66458b6674-f74j6"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.653657 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8483d143-98a0-4b97-af59-bf98eceb47cd-audit-dir\") pod \"apiserver-9ddfb9f55-bnjd9\" (UID: \"8483d143-98a0-4b97-af59-bf98eceb47cd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.653676 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-f74j6\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " pod="openshift-authentication/oauth-openshift-66458b6674-f74j6"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.653697 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/af605d44-e53a-4c60-8372-384c58f82b2b-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-ggq6c\" (UID: \"af605d44-e53a-4c60-8372-384c58f82b2b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-ggq6c"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.653720 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8483d143-98a0-4b97-af59-bf98eceb47cd-config\") pod \"apiserver-9ddfb9f55-bnjd9\" (UID: \"8483d143-98a0-4b97-af59-bf98eceb47cd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.653760 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-f74j6\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " pod="openshift-authentication/oauth-openshift-66458b6674-f74j6"
Jan 21 09:19:42
crc kubenswrapper[5113]: I0121 09:19:42.653784 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b85968dd-28ce-49c6-b8bf-ac62d18452b4-config\") pod \"console-operator-67c89758df-b46m8\" (UID: \"b85968dd-28ce-49c6-b8bf-ac62d18452b4\") " pod="openshift-console-operator/console-operator-67c89758df-b46m8" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.653805 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gv27\" (UniqueName: \"kubernetes.io/projected/6f837af6-e4b6-4cc8-a869-125d0646e747-kube-api-access-9gv27\") pod \"cluster-samples-operator-6b564684c8-hr5w7\" (UID: \"6f837af6-e4b6-4cc8-a869-125d0646e747\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-hr5w7" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.653808 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pltp7"] Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.653824 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9041fd4-ea1a-453e-b9c6-efe382434cc0-config\") pod \"machine-api-operator-755bb95488-qtl4r\" (UID: \"d9041fd4-ea1a-453e-b9c6-efe382434cc0\") " pod="openshift-machine-api/machine-api-operator-755bb95488-qtl4r" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.653848 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8ec25fb-f982-43d5-90dd-2f369ea2fa7f-config\") pod \"controller-manager-65b6cccf98-twjhp\" (UID: \"b8ec25fb-f982-43d5-90dd-2f369ea2fa7f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-twjhp" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.653869 5113 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-f74j6\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " pod="openshift-authentication/oauth-openshift-66458b6674-f74j6" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.653915 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svpz5\" (UniqueName: \"kubernetes.io/projected/cc029c59-845c-4021-be17-fe92d61a361f-kube-api-access-svpz5\") pod \"openshift-controller-manager-operator-686468bdd5-qbp2v\" (UID: \"cc029c59-845c-4021-be17-fe92d61a361f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qbp2v" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.653938 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8ec25fb-f982-43d5-90dd-2f369ea2fa7f-serving-cert\") pod \"controller-manager-65b6cccf98-twjhp\" (UID: \"b8ec25fb-f982-43d5-90dd-2f369ea2fa7f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-twjhp" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.653960 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj9ph\" (UniqueName: \"kubernetes.io/projected/579c5f31-2382-48db-8f59-60a7ed0827ed-kube-api-access-gj9ph\") pod \"route-controller-manager-776cdc94d6-wcvvf\" (UID: \"579c5f31-2382-48db-8f59-60a7ed0827ed\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wcvvf" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.653987 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/cc029c59-845c-4021-be17-fe92d61a361f-config\") pod \"openshift-controller-manager-operator-686468bdd5-qbp2v\" (UID: \"cc029c59-845c-4021-be17-fe92d61a361f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qbp2v" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.654010 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af605d44-e53a-4c60-8372-384c58f82b2b-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-ggq6c\" (UID: \"af605d44-e53a-4c60-8372-384c58f82b2b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-ggq6c" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.654035 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-f74j6\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " pod="openshift-authentication/oauth-openshift-66458b6674-f74j6" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.654058 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/818c11fe-b682-4b2c-9f47-dee838219e31-config\") pod \"openshift-apiserver-operator-846cbfc458-78rvb\" (UID: \"818c11fe-b682-4b2c-9f47-dee838219e31\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-78rvb" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.654081 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/579c5f31-2382-48db-8f59-60a7ed0827ed-tmp\") pod \"route-controller-manager-776cdc94d6-wcvvf\" (UID: 
\"579c5f31-2382-48db-8f59-60a7ed0827ed\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wcvvf" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.654102 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8c97f377-41b0-44cb-a19b-fd80d93481e2-machine-approver-tls\") pod \"machine-approver-54c688565-hcv8d\" (UID: \"8c97f377-41b0-44cb-a19b-fd80d93481e2\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-hcv8d" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.654137 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-f74j6\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " pod="openshift-authentication/oauth-openshift-66458b6674-f74j6" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.654159 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47f4bfde-8b00-4a3c-b405-f928eda4dc04-config\") pod \"authentication-operator-7f5c659b84-4nhkn\" (UID: \"47f4bfde-8b00-4a3c-b405-f928eda4dc04\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4nhkn" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.654190 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8483d143-98a0-4b97-af59-bf98eceb47cd-etcd-client\") pod \"apiserver-9ddfb9f55-bnjd9\" (UID: \"8483d143-98a0-4b97-af59-bf98eceb47cd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.654213 5113 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/92e0cb66-e547-44e5-a384-bd522e554577-audit-policies\") pod \"oauth-openshift-66458b6674-f74j6\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " pod="openshift-authentication/oauth-openshift-66458b6674-f74j6" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.654236 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47f4bfde-8b00-4a3c-b405-f928eda4dc04-serving-cert\") pod \"authentication-operator-7f5c659b84-4nhkn\" (UID: \"47f4bfde-8b00-4a3c-b405-f928eda4dc04\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4nhkn" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.654264 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-f74j6\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " pod="openshift-authentication/oauth-openshift-66458b6674-f74j6" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.654284 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/579c5f31-2382-48db-8f59-60a7ed0827ed-client-ca\") pod \"route-controller-manager-776cdc94d6-wcvvf\" (UID: \"579c5f31-2382-48db-8f59-60a7ed0827ed\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wcvvf" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.654317 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8483d143-98a0-4b97-af59-bf98eceb47cd-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-bnjd9\" (UID: 
\"8483d143-98a0-4b97-af59-bf98eceb47cd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.654340 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47f4bfde-8b00-4a3c-b405-f928eda4dc04-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-4nhkn\" (UID: \"47f4bfde-8b00-4a3c-b405-f928eda4dc04\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4nhkn" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.654374 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j5wkf\" (UniqueName: \"kubernetes.io/projected/8c97f377-41b0-44cb-a19b-fd80d93481e2-kube-api-access-j5wkf\") pod \"machine-approver-54c688565-hcv8d\" (UID: \"8c97f377-41b0-44cb-a19b-fd80d93481e2\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-hcv8d" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.654398 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b85968dd-28ce-49c6-b8bf-ac62d18452b4-serving-cert\") pod \"console-operator-67c89758df-b46m8\" (UID: \"b85968dd-28ce-49c6-b8bf-ac62d18452b4\") " pod="openshift-console-operator/console-operator-67c89758df-b46m8" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.654419 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f837af6-e4b6-4cc8-a869-125d0646e747-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-hr5w7\" (UID: \"6f837af6-e4b6-4cc8-a869-125d0646e747\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-hr5w7" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.654438 5113 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af605d44-e53a-4c60-8372-384c58f82b2b-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-ggq6c\" (UID: \"af605d44-e53a-4c60-8372-384c58f82b2b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-ggq6c" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.654460 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvvw2\" (UniqueName: \"kubernetes.io/projected/e4a07741-f83d-4dd4-b52c-1e55f3629eb1-kube-api-access-pvvw2\") pod \"downloads-747b44746d-6cwdn\" (UID: \"e4a07741-f83d-4dd4-b52c-1e55f3629eb1\") " pod="openshift-console/downloads-747b44746d-6cwdn" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.654484 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b8ec25fb-f982-43d5-90dd-2f369ea2fa7f-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-twjhp\" (UID: \"b8ec25fb-f982-43d5-90dd-2f369ea2fa7f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-twjhp" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.654508 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/92e0cb66-e547-44e5-a384-bd522e554577-audit-dir\") pod \"oauth-openshift-66458b6674-f74j6\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " pod="openshift-authentication/oauth-openshift-66458b6674-f74j6" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.654531 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-f74j6\" 
(UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " pod="openshift-authentication/oauth-openshift-66458b6674-f74j6" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.654555 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-f74j6\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " pod="openshift-authentication/oauth-openshift-66458b6674-f74j6" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.654585 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8c97f377-41b0-44cb-a19b-fd80d93481e2-auth-proxy-config\") pod \"machine-approver-54c688565-hcv8d\" (UID: \"8c97f377-41b0-44cb-a19b-fd80d93481e2\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-hcv8d" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.654609 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af605d44-e53a-4c60-8372-384c58f82b2b-config\") pod \"kube-controller-manager-operator-69d5f845f8-ggq6c\" (UID: \"af605d44-e53a-4c60-8372-384c58f82b2b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-ggq6c" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.654641 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zfs4r\" (UniqueName: \"kubernetes.io/projected/b8ec25fb-f982-43d5-90dd-2f369ea2fa7f-kube-api-access-zfs4r\") pod \"controller-manager-65b6cccf98-twjhp\" (UID: \"b8ec25fb-f982-43d5-90dd-2f369ea2fa7f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-twjhp" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.654665 
5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-f74j6\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " pod="openshift-authentication/oauth-openshift-66458b6674-f74j6" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.654688 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d9041fd4-ea1a-453e-b9c6-efe382434cc0-images\") pod \"machine-api-operator-755bb95488-qtl4r\" (UID: \"d9041fd4-ea1a-453e-b9c6-efe382434cc0\") " pod="openshift-machine-api/machine-api-operator-755bb95488-qtl4r" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.654716 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8483d143-98a0-4b97-af59-bf98eceb47cd-serving-cert\") pod \"apiserver-9ddfb9f55-bnjd9\" (UID: \"8483d143-98a0-4b97-af59-bf98eceb47cd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.654766 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8ec25fb-f982-43d5-90dd-2f369ea2fa7f-client-ca\") pod \"controller-manager-65b6cccf98-twjhp\" (UID: \"b8ec25fb-f982-43d5-90dd-2f369ea2fa7f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-twjhp" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.654790 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cc029c59-845c-4021-be17-fe92d61a361f-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-qbp2v\" (UID: \"cc029c59-845c-4021-be17-fe92d61a361f\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qbp2v" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.654817 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26d8w\" (UniqueName: \"kubernetes.io/projected/818c11fe-b682-4b2c-9f47-dee838219e31-kube-api-access-26d8w\") pod \"openshift-apiserver-operator-846cbfc458-78rvb\" (UID: \"818c11fe-b682-4b2c-9f47-dee838219e31\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-78rvb" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.654846 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlth5\" (UniqueName: \"kubernetes.io/projected/92e0cb66-e547-44e5-a384-bd522e554577-kube-api-access-vlth5\") pod \"oauth-openshift-66458b6674-f74j6\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " pod="openshift-authentication/oauth-openshift-66458b6674-f74j6" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.654869 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47f4bfde-8b00-4a3c-b405-f928eda4dc04-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-4nhkn\" (UID: \"47f4bfde-8b00-4a3c-b405-f928eda4dc04\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4nhkn" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.654896 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8483d143-98a0-4b97-af59-bf98eceb47cd-audit\") pod \"apiserver-9ddfb9f55-bnjd9\" (UID: \"8483d143-98a0-4b97-af59-bf98eceb47cd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.654919 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-f74j6\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " pod="openshift-authentication/oauth-openshift-66458b6674-f74j6" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.654954 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vlqc\" (UniqueName: \"kubernetes.io/projected/47f4bfde-8b00-4a3c-b405-f928eda4dc04-kube-api-access-4vlqc\") pod \"authentication-operator-7f5c659b84-4nhkn\" (UID: \"47f4bfde-8b00-4a3c-b405-f928eda4dc04\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4nhkn" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.654988 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8483d143-98a0-4b97-af59-bf98eceb47cd-node-pullsecrets\") pod \"apiserver-9ddfb9f55-bnjd9\" (UID: \"8483d143-98a0-4b97-af59-bf98eceb47cd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.655009 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b85968dd-28ce-49c6-b8bf-ac62d18452b4-trusted-ca\") pod \"console-operator-67c89758df-b46m8\" (UID: \"b85968dd-28ce-49c6-b8bf-ac62d18452b4\") " pod="openshift-console-operator/console-operator-67c89758df-b46m8" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.655043 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-599c4\" (UniqueName: \"kubernetes.io/projected/b85968dd-28ce-49c6-b8bf-ac62d18452b4-kube-api-access-599c4\") pod \"console-operator-67c89758df-b46m8\" (UID: 
\"b85968dd-28ce-49c6-b8bf-ac62d18452b4\") " pod="openshift-console-operator/console-operator-67c89758df-b46m8" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.656297 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8483d143-98a0-4b97-af59-bf98eceb47cd-node-pullsecrets\") pod \"apiserver-9ddfb9f55-bnjd9\" (UID: \"8483d143-98a0-4b97-af59-bf98eceb47cd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.657286 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-f8w74"] Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.665244 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.668056 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8483d143-98a0-4b97-af59-bf98eceb47cd-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-bnjd9\" (UID: \"8483d143-98a0-4b97-af59-bf98eceb47cd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.669021 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8483d143-98a0-4b97-af59-bf98eceb47cd-encryption-config\") pod \"apiserver-9ddfb9f55-bnjd9\" (UID: \"8483d143-98a0-4b97-af59-bf98eceb47cd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.673749 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8483d143-98a0-4b97-af59-bf98eceb47cd-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-bnjd9\" (UID: 
\"8483d143-98a0-4b97-af59-bf98eceb47cd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.674610 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8483d143-98a0-4b97-af59-bf98eceb47cd-audit-dir\") pod \"apiserver-9ddfb9f55-bnjd9\" (UID: \"8483d143-98a0-4b97-af59-bf98eceb47cd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.674811 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b8ec25fb-f982-43d5-90dd-2f369ea2fa7f-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-twjhp\" (UID: \"b8ec25fb-f982-43d5-90dd-2f369ea2fa7f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-twjhp" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.675085 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8483d143-98a0-4b97-af59-bf98eceb47cd-etcd-client\") pod \"apiserver-9ddfb9f55-bnjd9\" (UID: \"8483d143-98a0-4b97-af59-bf98eceb47cd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.675403 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c97f377-41b0-44cb-a19b-fd80d93481e2-config\") pod \"machine-approver-54c688565-hcv8d\" (UID: \"8c97f377-41b0-44cb-a19b-fd80d93481e2\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-hcv8d" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.675459 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b8ec25fb-f982-43d5-90dd-2f369ea2fa7f-tmp\") pod \"controller-manager-65b6cccf98-twjhp\" (UID: \"b8ec25fb-f982-43d5-90dd-2f369ea2fa7f\") " 
pod="openshift-controller-manager/controller-manager-65b6cccf98-twjhp"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.675765 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8ec25fb-f982-43d5-90dd-2f369ea2fa7f-client-ca\") pod \"controller-manager-65b6cccf98-twjhp\" (UID: \"b8ec25fb-f982-43d5-90dd-2f369ea2fa7f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-twjhp"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.675970 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.675988 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8c97f377-41b0-44cb-a19b-fd80d93481e2-auth-proxy-config\") pod \"machine-approver-54c688565-hcv8d\" (UID: \"8c97f377-41b0-44cb-a19b-fd80d93481e2\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-hcv8d"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.676497 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8483d143-98a0-4b97-af59-bf98eceb47cd-audit\") pod \"apiserver-9ddfb9f55-bnjd9\" (UID: \"8483d143-98a0-4b97-af59-bf98eceb47cd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.676512 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8483d143-98a0-4b97-af59-bf98eceb47cd-image-import-ca\") pod \"apiserver-9ddfb9f55-bnjd9\" (UID: \"8483d143-98a0-4b97-af59-bf98eceb47cd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.676637 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8483d143-98a0-4b97-af59-bf98eceb47cd-config\") pod \"apiserver-9ddfb9f55-bnjd9\" (UID: \"8483d143-98a0-4b97-af59-bf98eceb47cd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.676673 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pltp7"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.677877 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8ec25fb-f982-43d5-90dd-2f369ea2fa7f-config\") pod \"controller-manager-65b6cccf98-twjhp\" (UID: \"b8ec25fb-f982-43d5-90dd-2f369ea2fa7f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-twjhp"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.678249 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8ec25fb-f982-43d5-90dd-2f369ea2fa7f-serving-cert\") pod \"controller-manager-65b6cccf98-twjhp\" (UID: \"b8ec25fb-f982-43d5-90dd-2f369ea2fa7f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-twjhp"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.678839 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8483d143-98a0-4b97-af59-bf98eceb47cd-serving-cert\") pod \"apiserver-9ddfb9f55-bnjd9\" (UID: \"8483d143-98a0-4b97-af59-bf98eceb47cd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.681010 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8c97f377-41b0-44cb-a19b-fd80d93481e2-machine-approver-tls\") pod \"machine-approver-54c688565-hcv8d\" (UID: \"8c97f377-41b0-44cb-a19b-fd80d93481e2\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-hcv8d"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.682617 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-g2wdf"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.687058 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-8gwvh"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.687117 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-f8w74"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.687223 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-g2wdf"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.689680 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6jtml"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.689815 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-8gwvh"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.690900 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.691822 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-p2z4m"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.692097 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6jtml"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.694983 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-6zxjl"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.695151 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-p2z4m"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.696927 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-rrtzq"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.697030 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-6zxjl"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.700213 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zdngj"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.700634 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-rrtzq"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.708184 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-6r26r"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.708321 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zdngj"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.711552 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-q9rxd"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.711559 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.712093 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-6r26r"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.720479 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-twjhp"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.720508 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-qtl4r"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.720519 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jlkp8"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.720665 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-q9rxd"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.723270 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-6cwdn"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.723294 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-pj5xv"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.724202 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jlkp8"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.725835 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-4nhkn"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.725856 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483115-gpmgl"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.726006 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-pj5xv"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.728201 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qbp2v"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.728229 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-wcvvf"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.728240 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mssdn"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.728366 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483115-gpmgl"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.732530 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.733588 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-hr5w7"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.733615 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-78rvb"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.733628 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nqfz9"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.733984 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mssdn"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.736542 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xdz8l"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.736664 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nqfz9"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.740989 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-f74j6"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.741013 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-nrprx"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.741025 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-bnjd9"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.741037 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-g2wdf"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.741049 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pltp7"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.741061 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-q8ftx"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.741078 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zdngj"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.741089 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-fdg8l"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.741099 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6jtml"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.741119 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-f9nd8"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.741120 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xdz8l"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.743935 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-p2z4m"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.743959 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-ggq6c"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.743969 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-8v2mh"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.743979 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-2jmh4"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.744047 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-pj5xv"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.744059 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-q9rxd"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.744071 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-7lgzg"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.744116 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-f9nd8"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.746937 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-b46m8"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.746964 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-qgt8d"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.746975 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-f8w74"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.746985 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-vxj85"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.746994 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nqfz9"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.747003 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-rrtzq"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.747011 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-6zxjl"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.747016 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7lgzg"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.747020 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-wt9pn"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.747159 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-6r26r"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.747170 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jlkp8"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.747180 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483115-gpmgl"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.747189 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-8gwvh"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.747198 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-7lgzg"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.747206 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mssdn"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.747218 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xdz8l"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.747232 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-q9t2j"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.751427 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\""
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.751767 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-648hm"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.752006 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-q9t2j"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.755872 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-q9t2j"]
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.755976 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-648hm"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.756228 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/98edd4b9-a5db-4e54-b89a-07ea8d30ea38-etcd-client\") pod \"etcd-operator-69b85846b6-vxj85\" (UID: \"98edd4b9-a5db-4e54-b89a-07ea8d30ea38\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-vxj85"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.756268 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/579c5f31-2382-48db-8f59-60a7ed0827ed-config\") pod \"route-controller-manager-776cdc94d6-wcvvf\" (UID: \"579c5f31-2382-48db-8f59-60a7ed0827ed\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wcvvf"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.756294 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/493bf246-7422-45bb-a74f-34f4314445de-serving-cert\") pod \"openshift-config-operator-5777786469-8v2mh\" (UID: \"493bf246-7422-45bb-a74f-34f4314445de\") " pod="openshift-config-operator/openshift-config-operator-5777786469-8v2mh"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.756334 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/fb37dc5c-fa1d-4f5c-a331-e4855245f95f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-q8ftx\" (UID: \"fb37dc5c-fa1d-4f5c-a331-e4855245f95f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-q8ftx"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.756449 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98edd4b9-a5db-4e54-b89a-07ea8d30ea38-config\") pod \"etcd-operator-69b85846b6-vxj85\" (UID: \"98edd4b9-a5db-4e54-b89a-07ea8d30ea38\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-vxj85"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.756485 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d9041fd4-ea1a-453e-b9c6-efe382434cc0-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-qtl4r\" (UID: \"d9041fd4-ea1a-453e-b9c6-efe382434cc0\") " pod="openshift-machine-api/machine-api-operator-755bb95488-qtl4r"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.756504 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lcgls\" (UniqueName: \"kubernetes.io/projected/d9041fd4-ea1a-453e-b9c6-efe382434cc0-kube-api-access-lcgls\") pod \"machine-api-operator-755bb95488-qtl4r\" (UID: \"d9041fd4-ea1a-453e-b9c6-efe382434cc0\") " pod="openshift-machine-api/machine-api-operator-755bb95488-qtl4r"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.756525 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc029c59-845c-4021-be17-fe92d61a361f-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-qbp2v\" (UID: \"cc029c59-845c-4021-be17-fe92d61a361f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qbp2v"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.756543 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/818c11fe-b682-4b2c-9f47-dee838219e31-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-78rvb\" (UID: \"818c11fe-b682-4b2c-9f47-dee838219e31\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-78rvb"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.756563 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/579c5f31-2382-48db-8f59-60a7ed0827ed-serving-cert\") pod \"route-controller-manager-776cdc94d6-wcvvf\" (UID: \"579c5f31-2382-48db-8f59-60a7ed0827ed\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wcvvf"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.756580 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e718db5-bd36-400d-8121-5afc39eb6777-metrics-certs\") pod \"router-default-68cf44c8b8-tbd7w\" (UID: \"6e718db5-bd36-400d-8121-5afc39eb6777\") " pod="openshift-ingress/router-default-68cf44c8b8-tbd7w"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.757130 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/65764ab8-1117-4c6d-9af3-b8665ebeac26-tmp-dir\") pod \"dns-operator-799b87ffcd-2jmh4\" (UID: \"65764ab8-1117-4c6d-9af3-b8665ebeac26\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-2jmh4"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.757228 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-f74j6\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " pod="openshift-authentication/oauth-openshift-66458b6674-f74j6"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.757265 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f0907a3e-c31f-491e-ac86-e289dd5d426a-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-fdg8l\" (UID: \"f0907a3e-c31f-491e-ac86-e289dd5d426a\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-fdg8l"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.757293 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsdhr\" (UniqueName: \"kubernetes.io/projected/fb37dc5c-fa1d-4f5c-a331-e4855245f95f-kube-api-access-qsdhr\") pod \"cluster-image-registry-operator-86c45576b9-q8ftx\" (UID: \"fb37dc5c-fa1d-4f5c-a331-e4855245f95f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-q8ftx"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.757301 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/579c5f31-2382-48db-8f59-60a7ed0827ed-config\") pod \"route-controller-manager-776cdc94d6-wcvvf\" (UID: \"579c5f31-2382-48db-8f59-60a7ed0827ed\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wcvvf"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.757331 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-f74j6\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " pod="openshift-authentication/oauth-openshift-66458b6674-f74j6"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.757394 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/af605d44-e53a-4c60-8372-384c58f82b2b-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-ggq6c\" (UID: \"af605d44-e53a-4c60-8372-384c58f82b2b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-ggq6c"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.757415 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-f74j6\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " pod="openshift-authentication/oauth-openshift-66458b6674-f74j6"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.757431 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b85968dd-28ce-49c6-b8bf-ac62d18452b4-config\") pod \"console-operator-67c89758df-b46m8\" (UID: \"b85968dd-28ce-49c6-b8bf-ac62d18452b4\") " pod="openshift-console-operator/console-operator-67c89758df-b46m8"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.757754 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/af605d44-e53a-4c60-8372-384c58f82b2b-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-ggq6c\" (UID: \"af605d44-e53a-4c60-8372-384c58f82b2b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-ggq6c"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.757833 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-f74j6\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " pod="openshift-authentication/oauth-openshift-66458b6674-f74j6"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.757812 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9gv27\" (UniqueName: \"kubernetes.io/projected/6f837af6-e4b6-4cc8-a869-125d0646e747-kube-api-access-9gv27\") pod \"cluster-samples-operator-6b564684c8-hr5w7\" (UID: \"6f837af6-e4b6-4cc8-a869-125d0646e747\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-hr5w7"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.757873 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9041fd4-ea1a-453e-b9c6-efe382434cc0-config\") pod \"machine-api-operator-755bb95488-qtl4r\" (UID: \"d9041fd4-ea1a-453e-b9c6-efe382434cc0\") " pod="openshift-machine-api/machine-api-operator-755bb95488-qtl4r"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.757895 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjpqr\" (UniqueName: \"kubernetes.io/projected/f0907a3e-c31f-491e-ac86-e289dd5d426a-kube-api-access-cjpqr\") pod \"ingress-operator-6b9cb4dbcf-fdg8l\" (UID: \"f0907a3e-c31f-491e-ac86-e289dd5d426a\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-fdg8l"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.757913 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fb37dc5c-fa1d-4f5c-a331-e4855245f95f-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-q8ftx\" (UID: \"fb37dc5c-fa1d-4f5c-a331-e4855245f95f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-q8ftx"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.757931 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-f74j6\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " pod="openshift-authentication/oauth-openshift-66458b6674-f74j6"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.757950 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-svpz5\" (UniqueName: \"kubernetes.io/projected/cc029c59-845c-4021-be17-fe92d61a361f-kube-api-access-svpz5\") pod \"openshift-controller-manager-operator-686468bdd5-qbp2v\" (UID: \"cc029c59-845c-4021-be17-fe92d61a361f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qbp2v"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.757997 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/34bfafc0-b014-412d-8524-50aeb30d19ae-console-oauth-config\") pod \"console-64d44f6ddf-qgt8d\" (UID: \"34bfafc0-b014-412d-8524-50aeb30d19ae\") " pod="openshift-console/console-64d44f6ddf-qgt8d"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.758198 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fb37dc5c-fa1d-4f5c-a331-e4855245f95f-tmp\") pod \"cluster-image-registry-operator-86c45576b9-q8ftx\" (UID: \"fb37dc5c-fa1d-4f5c-a331-e4855245f95f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-q8ftx"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.758248 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gj9ph\" (UniqueName: \"kubernetes.io/projected/579c5f31-2382-48db-8f59-60a7ed0827ed-kube-api-access-gj9ph\") pod \"route-controller-manager-776cdc94d6-wcvvf\" (UID: \"579c5f31-2382-48db-8f59-60a7ed0827ed\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wcvvf"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.758352 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc029c59-845c-4021-be17-fe92d61a361f-config\") pod \"openshift-controller-manager-operator-686468bdd5-qbp2v\" (UID: \"cc029c59-845c-4021-be17-fe92d61a361f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qbp2v"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.758388 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af605d44-e53a-4c60-8372-384c58f82b2b-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-ggq6c\" (UID: \"af605d44-e53a-4c60-8372-384c58f82b2b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-ggq6c"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.758451 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b85968dd-28ce-49c6-b8bf-ac62d18452b4-config\") pod \"console-operator-67c89758df-b46m8\" (UID: \"b85968dd-28ce-49c6-b8bf-ac62d18452b4\") " pod="openshift-console-operator/console-operator-67c89758df-b46m8"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.758473 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-f74j6\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " pod="openshift-authentication/oauth-openshift-66458b6674-f74j6"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.758558 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9041fd4-ea1a-453e-b9c6-efe382434cc0-config\") pod \"machine-api-operator-755bb95488-qtl4r\" (UID: \"d9041fd4-ea1a-453e-b9c6-efe382434cc0\") " pod="openshift-machine-api/machine-api-operator-755bb95488-qtl4r"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.758568 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/818c11fe-b682-4b2c-9f47-dee838219e31-config\") pod \"openshift-apiserver-operator-846cbfc458-78rvb\" (UID: \"818c11fe-b682-4b2c-9f47-dee838219e31\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-78rvb"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.758594 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/579c5f31-2382-48db-8f59-60a7ed0827ed-tmp\") pod \"route-controller-manager-776cdc94d6-wcvvf\" (UID: \"579c5f31-2382-48db-8f59-60a7ed0827ed\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wcvvf"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.758616 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e718db5-bd36-400d-8121-5afc39eb6777-service-ca-bundle\") pod \"router-default-68cf44c8b8-tbd7w\" (UID: \"6e718db5-bd36-400d-8121-5afc39eb6777\") " pod="openshift-ingress/router-default-68cf44c8b8-tbd7w"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.758672 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-f74j6\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " pod="openshift-authentication/oauth-openshift-66458b6674-f74j6"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.758690 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47f4bfde-8b00-4a3c-b405-f928eda4dc04-config\") pod \"authentication-operator-7f5c659b84-4nhkn\" (UID: \"47f4bfde-8b00-4a3c-b405-f928eda4dc04\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4nhkn"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.758706 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f0907a3e-c31f-491e-ac86-e289dd5d426a-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-fdg8l\" (UID: \"f0907a3e-c31f-491e-ac86-e289dd5d426a\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-fdg8l"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.758721 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74vzk\" (UniqueName: \"kubernetes.io/projected/65764ab8-1117-4c6d-9af3-b8665ebeac26-kube-api-access-74vzk\") pod \"dns-operator-799b87ffcd-2jmh4\" (UID: \"65764ab8-1117-4c6d-9af3-b8665ebeac26\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-2jmh4"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.758765 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/92e0cb66-e547-44e5-a384-bd522e554577-audit-policies\") pod \"oauth-openshift-66458b6674-f74j6\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " pod="openshift-authentication/oauth-openshift-66458b6674-f74j6"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.758788 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47f4bfde-8b00-4a3c-b405-f928eda4dc04-serving-cert\") pod \"authentication-operator-7f5c659b84-4nhkn\" (UID: \"47f4bfde-8b00-4a3c-b405-f928eda4dc04\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4nhkn"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.758819 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-f74j6\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " pod="openshift-authentication/oauth-openshift-66458b6674-f74j6"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.758840 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/579c5f31-2382-48db-8f59-60a7ed0827ed-client-ca\") pod \"route-controller-manager-776cdc94d6-wcvvf\" (UID: \"579c5f31-2382-48db-8f59-60a7ed0827ed\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wcvvf"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.758861 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/34bfafc0-b014-412d-8524-50aeb30d19ae-console-serving-cert\") pod \"console-64d44f6ddf-qgt8d\" (UID: \"34bfafc0-b014-412d-8524-50aeb30d19ae\") " pod="openshift-console/console-64d44f6ddf-qgt8d"
Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.758877 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName:
\"kubernetes.io/secret/6e718db5-bd36-400d-8121-5afc39eb6777-default-certificate\") pod \"router-default-68cf44c8b8-tbd7w\" (UID: \"6e718db5-bd36-400d-8121-5afc39eb6777\") " pod="openshift-ingress/router-default-68cf44c8b8-tbd7w" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.758892 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6e718db5-bd36-400d-8121-5afc39eb6777-stats-auth\") pod \"router-default-68cf44c8b8-tbd7w\" (UID: \"6e718db5-bd36-400d-8121-5afc39eb6777\") " pod="openshift-ingress/router-default-68cf44c8b8-tbd7w" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.758906 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/98edd4b9-a5db-4e54-b89a-07ea8d30ea38-tmp-dir\") pod \"etcd-operator-69b85846b6-vxj85\" (UID: \"98edd4b9-a5db-4e54-b89a-07ea8d30ea38\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-vxj85" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.758941 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47f4bfde-8b00-4a3c-b405-f928eda4dc04-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-4nhkn\" (UID: \"47f4bfde-8b00-4a3c-b405-f928eda4dc04\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4nhkn" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.759020 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b85968dd-28ce-49c6-b8bf-ac62d18452b4-serving-cert\") pod \"console-operator-67c89758df-b46m8\" (UID: \"b85968dd-28ce-49c6-b8bf-ac62d18452b4\") " pod="openshift-console-operator/console-operator-67c89758df-b46m8" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.759038 5113 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f837af6-e4b6-4cc8-a869-125d0646e747-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-hr5w7\" (UID: \"6f837af6-e4b6-4cc8-a869-125d0646e747\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-hr5w7" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.759053 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af605d44-e53a-4c60-8372-384c58f82b2b-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-ggq6c\" (UID: \"af605d44-e53a-4c60-8372-384c58f82b2b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-ggq6c" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.759069 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pvvw2\" (UniqueName: \"kubernetes.io/projected/e4a07741-f83d-4dd4-b52c-1e55f3629eb1-kube-api-access-pvvw2\") pod \"downloads-747b44746d-6cwdn\" (UID: \"e4a07741-f83d-4dd4-b52c-1e55f3629eb1\") " pod="openshift-console/downloads-747b44746d-6cwdn" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.759087 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/92e0cb66-e547-44e5-a384-bd522e554577-audit-dir\") pod \"oauth-openshift-66458b6674-f74j6\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " pod="openshift-authentication/oauth-openshift-66458b6674-f74j6" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.759171 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-f74j6\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-f74j6" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.759198 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-f74j6\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " pod="openshift-authentication/oauth-openshift-66458b6674-f74j6" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.759217 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/34bfafc0-b014-412d-8524-50aeb30d19ae-console-config\") pod \"console-64d44f6ddf-qgt8d\" (UID: \"34bfafc0-b014-412d-8524-50aeb30d19ae\") " pod="openshift-console/console-64d44f6ddf-qgt8d" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.759232 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x64bl\" (UniqueName: \"kubernetes.io/projected/34bfafc0-b014-412d-8524-50aeb30d19ae-kube-api-access-x64bl\") pod \"console-64d44f6ddf-qgt8d\" (UID: \"34bfafc0-b014-412d-8524-50aeb30d19ae\") " pod="openshift-console/console-64d44f6ddf-qgt8d" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.759248 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4844g\" (UniqueName: \"kubernetes.io/projected/d0b4d330-efd9-4d42-b2a7-bf8c0c126f8a-kube-api-access-4844g\") pod \"migrator-866fcbc849-f8w74\" (UID: \"d0b4d330-efd9-4d42-b2a7-bf8c0c126f8a\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-f8w74" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.759255 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/cc029c59-845c-4021-be17-fe92d61a361f-config\") pod \"openshift-controller-manager-operator-686468bdd5-qbp2v\" (UID: \"cc029c59-845c-4021-be17-fe92d61a361f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qbp2v" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.759263 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/98edd4b9-a5db-4e54-b89a-07ea8d30ea38-etcd-service-ca\") pod \"etcd-operator-69b85846b6-vxj85\" (UID: \"98edd4b9-a5db-4e54-b89a-07ea8d30ea38\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-vxj85" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.759293 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/92e0cb66-e547-44e5-a384-bd522e554577-audit-dir\") pod \"oauth-openshift-66458b6674-f74j6\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " pod="openshift-authentication/oauth-openshift-66458b6674-f74j6" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.759341 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af605d44-e53a-4c60-8372-384c58f82b2b-config\") pod \"kube-controller-manager-operator-69d5f845f8-ggq6c\" (UID: \"af605d44-e53a-4c60-8372-384c58f82b2b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-ggq6c" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.759370 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98edd4b9-a5db-4e54-b89a-07ea8d30ea38-serving-cert\") pod \"etcd-operator-69b85846b6-vxj85\" (UID: \"98edd4b9-a5db-4e54-b89a-07ea8d30ea38\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-vxj85" Jan 21 09:19:42 crc 
kubenswrapper[5113]: I0121 09:19:42.759393 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fb37dc5c-fa1d-4f5c-a331-e4855245f95f-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-q8ftx\" (UID: \"fb37dc5c-fa1d-4f5c-a331-e4855245f95f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-q8ftx" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.759430 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-f74j6\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " pod="openshift-authentication/oauth-openshift-66458b6674-f74j6" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.759451 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/818c11fe-b682-4b2c-9f47-dee838219e31-config\") pod \"openshift-apiserver-operator-846cbfc458-78rvb\" (UID: \"818c11fe-b682-4b2c-9f47-dee838219e31\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-78rvb" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.759456 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d9041fd4-ea1a-453e-b9c6-efe382434cc0-images\") pod \"machine-api-operator-755bb95488-qtl4r\" (UID: \"d9041fd4-ea1a-453e-b9c6-efe382434cc0\") " pod="openshift-machine-api/machine-api-operator-755bb95488-qtl4r" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.759878 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47f4bfde-8b00-4a3c-b405-f928eda4dc04-service-ca-bundle\") pod 
\"authentication-operator-7f5c659b84-4nhkn\" (UID: \"47f4bfde-8b00-4a3c-b405-f928eda4dc04\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4nhkn" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.760085 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af605d44-e53a-4c60-8372-384c58f82b2b-config\") pod \"kube-controller-manager-operator-69d5f845f8-ggq6c\" (UID: \"af605d44-e53a-4c60-8372-384c58f82b2b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-ggq6c" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.760102 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-f74j6\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " pod="openshift-authentication/oauth-openshift-66458b6674-f74j6" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.760493 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/92e0cb66-e547-44e5-a384-bd522e554577-audit-policies\") pod \"oauth-openshift-66458b6674-f74j6\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " pod="openshift-authentication/oauth-openshift-66458b6674-f74j6" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.760519 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47f4bfde-8b00-4a3c-b405-f928eda4dc04-config\") pod \"authentication-operator-7f5c659b84-4nhkn\" (UID: \"47f4bfde-8b00-4a3c-b405-f928eda4dc04\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4nhkn" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.760954 5113 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-f74j6\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " pod="openshift-authentication/oauth-openshift-66458b6674-f74j6" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.761238 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc029c59-845c-4021-be17-fe92d61a361f-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-qbp2v\" (UID: \"cc029c59-845c-4021-be17-fe92d61a361f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qbp2v" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.761552 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d9041fd4-ea1a-453e-b9c6-efe382434cc0-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-qtl4r\" (UID: \"d9041fd4-ea1a-453e-b9c6-efe382434cc0\") " pod="openshift-machine-api/machine-api-operator-755bb95488-qtl4r" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.761659 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/98edd4b9-a5db-4e54-b89a-07ea8d30ea38-etcd-ca\") pod \"etcd-operator-69b85846b6-vxj85\" (UID: \"98edd4b9-a5db-4e54-b89a-07ea8d30ea38\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-vxj85" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.761746 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/493bf246-7422-45bb-a74f-34f4314445de-available-featuregates\") pod \"openshift-config-operator-5777786469-8v2mh\" (UID: 
\"493bf246-7422-45bb-a74f-34f4314445de\") " pod="openshift-config-operator/openshift-config-operator-5777786469-8v2mh" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.761809 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-f74j6\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " pod="openshift-authentication/oauth-openshift-66458b6674-f74j6" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.761845 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cc029c59-845c-4021-be17-fe92d61a361f-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-qbp2v\" (UID: \"cc029c59-845c-4021-be17-fe92d61a361f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qbp2v" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.761980 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm7kx\" (UniqueName: \"kubernetes.io/projected/6e718db5-bd36-400d-8121-5afc39eb6777-kube-api-access-jm7kx\") pod \"router-default-68cf44c8b8-tbd7w\" (UID: \"6e718db5-bd36-400d-8121-5afc39eb6777\") " pod="openshift-ingress/router-default-68cf44c8b8-tbd7w" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.762120 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75tts\" (UniqueName: \"kubernetes.io/projected/98edd4b9-a5db-4e54-b89a-07ea8d30ea38-kube-api-access-75tts\") pod \"etcd-operator-69b85846b6-vxj85\" (UID: \"98edd4b9-a5db-4e54-b89a-07ea8d30ea38\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-vxj85" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.762233 5113 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/579c5f31-2382-48db-8f59-60a7ed0827ed-client-ca\") pod \"route-controller-manager-776cdc94d6-wcvvf\" (UID: \"579c5f31-2382-48db-8f59-60a7ed0827ed\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wcvvf" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.762303 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cc029c59-845c-4021-be17-fe92d61a361f-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-qbp2v\" (UID: \"cc029c59-845c-4021-be17-fe92d61a361f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qbp2v" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.762299 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/579c5f31-2382-48db-8f59-60a7ed0827ed-tmp\") pod \"route-controller-manager-776cdc94d6-wcvvf\" (UID: \"579c5f31-2382-48db-8f59-60a7ed0827ed\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wcvvf" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.762428 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/fb37dc5c-fa1d-4f5c-a331-e4855245f95f-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-q8ftx\" (UID: \"fb37dc5c-fa1d-4f5c-a331-e4855245f95f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-q8ftx" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.762556 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-26d8w\" (UniqueName: \"kubernetes.io/projected/818c11fe-b682-4b2c-9f47-dee838219e31-kube-api-access-26d8w\") pod \"openshift-apiserver-operator-846cbfc458-78rvb\" (UID: 
\"818c11fe-b682-4b2c-9f47-dee838219e31\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-78rvb" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.762672 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-f74j6\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " pod="openshift-authentication/oauth-openshift-66458b6674-f74j6" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.762621 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-f74j6\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " pod="openshift-authentication/oauth-openshift-66458b6674-f74j6" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.762558 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d9041fd4-ea1a-453e-b9c6-efe382434cc0-images\") pod \"machine-api-operator-755bb95488-qtl4r\" (UID: \"d9041fd4-ea1a-453e-b9c6-efe382434cc0\") " pod="openshift-machine-api/machine-api-operator-755bb95488-qtl4r" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.762677 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vlth5\" (UniqueName: \"kubernetes.io/projected/92e0cb66-e547-44e5-a384-bd522e554577-kube-api-access-vlth5\") pod \"oauth-openshift-66458b6674-f74j6\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " pod="openshift-authentication/oauth-openshift-66458b6674-f74j6" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.762804 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/47f4bfde-8b00-4a3c-b405-f928eda4dc04-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-4nhkn\" (UID: \"47f4bfde-8b00-4a3c-b405-f928eda4dc04\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4nhkn" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.762830 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/34bfafc0-b014-412d-8524-50aeb30d19ae-service-ca\") pod \"console-64d44f6ddf-qgt8d\" (UID: \"34bfafc0-b014-412d-8524-50aeb30d19ae\") " pod="openshift-console/console-64d44f6ddf-qgt8d" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.762852 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/34bfafc0-b014-412d-8524-50aeb30d19ae-oauth-serving-cert\") pod \"console-64d44f6ddf-qgt8d\" (UID: \"34bfafc0-b014-412d-8524-50aeb30d19ae\") " pod="openshift-console/console-64d44f6ddf-qgt8d" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.762873 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f0907a3e-c31f-491e-ac86-e289dd5d426a-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-fdg8l\" (UID: \"f0907a3e-c31f-491e-ac86-e289dd5d426a\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-fdg8l" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.762893 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/65764ab8-1117-4c6d-9af3-b8665ebeac26-metrics-tls\") pod \"dns-operator-799b87ffcd-2jmh4\" (UID: \"65764ab8-1117-4c6d-9af3-b8665ebeac26\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-2jmh4" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.762922 5113 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-f74j6\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " pod="openshift-authentication/oauth-openshift-66458b6674-f74j6" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.762951 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34bfafc0-b014-412d-8524-50aeb30d19ae-trusted-ca-bundle\") pod \"console-64d44f6ddf-qgt8d\" (UID: \"34bfafc0-b014-412d-8524-50aeb30d19ae\") " pod="openshift-console/console-64d44f6ddf-qgt8d" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.762976 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljgb8\" (UniqueName: \"kubernetes.io/projected/493bf246-7422-45bb-a74f-34f4314445de-kube-api-access-ljgb8\") pod \"openshift-config-operator-5777786469-8v2mh\" (UID: \"493bf246-7422-45bb-a74f-34f4314445de\") " pod="openshift-config-operator/openshift-config-operator-5777786469-8v2mh" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.763025 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4vlqc\" (UniqueName: \"kubernetes.io/projected/47f4bfde-8b00-4a3c-b405-f928eda4dc04-kube-api-access-4vlqc\") pod \"authentication-operator-7f5c659b84-4nhkn\" (UID: \"47f4bfde-8b00-4a3c-b405-f928eda4dc04\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4nhkn" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.763059 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b85968dd-28ce-49c6-b8bf-ac62d18452b4-trusted-ca\") pod 
\"console-operator-67c89758df-b46m8\" (UID: \"b85968dd-28ce-49c6-b8bf-ac62d18452b4\") " pod="openshift-console-operator/console-operator-67c89758df-b46m8" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.763122 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-599c4\" (UniqueName: \"kubernetes.io/projected/b85968dd-28ce-49c6-b8bf-ac62d18452b4-kube-api-access-599c4\") pod \"console-operator-67c89758df-b46m8\" (UID: \"b85968dd-28ce-49c6-b8bf-ac62d18452b4\") " pod="openshift-console-operator/console-operator-67c89758df-b46m8" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.763544 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47f4bfde-8b00-4a3c-b405-f928eda4dc04-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-4nhkn\" (UID: \"47f4bfde-8b00-4a3c-b405-f928eda4dc04\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4nhkn" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.763791 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b85968dd-28ce-49c6-b8bf-ac62d18452b4-trusted-ca\") pod \"console-operator-67c89758df-b46m8\" (UID: \"b85968dd-28ce-49c6-b8bf-ac62d18452b4\") " pod="openshift-console-operator/console-operator-67c89758df-b46m8" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.763863 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/818c11fe-b682-4b2c-9f47-dee838219e31-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-78rvb\" (UID: \"818c11fe-b682-4b2c-9f47-dee838219e31\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-78rvb" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.764361 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/579c5f31-2382-48db-8f59-60a7ed0827ed-serving-cert\") pod \"route-controller-manager-776cdc94d6-wcvvf\" (UID: \"579c5f31-2382-48db-8f59-60a7ed0827ed\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wcvvf" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.764521 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-f74j6\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " pod="openshift-authentication/oauth-openshift-66458b6674-f74j6" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.764905 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-f74j6\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " pod="openshift-authentication/oauth-openshift-66458b6674-f74j6" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.765477 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f837af6-e4b6-4cc8-a869-125d0646e747-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-hr5w7\" (UID: \"6f837af6-e4b6-4cc8-a869-125d0646e747\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-hr5w7" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.765586 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b85968dd-28ce-49c6-b8bf-ac62d18452b4-serving-cert\") pod \"console-operator-67c89758df-b46m8\" (UID: \"b85968dd-28ce-49c6-b8bf-ac62d18452b4\") " 
pod="openshift-console-operator/console-operator-67c89758df-b46m8" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.765806 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-f74j6\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " pod="openshift-authentication/oauth-openshift-66458b6674-f74j6" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.766049 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af605d44-e53a-4c60-8372-384c58f82b2b-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-ggq6c\" (UID: \"af605d44-e53a-4c60-8372-384c58f82b2b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-ggq6c" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.766087 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47f4bfde-8b00-4a3c-b405-f928eda4dc04-serving-cert\") pod \"authentication-operator-7f5c659b84-4nhkn\" (UID: \"47f4bfde-8b00-4a3c-b405-f928eda4dc04\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4nhkn" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.766329 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-f74j6\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " pod="openshift-authentication/oauth-openshift-66458b6674-f74j6" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.767155 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-f74j6\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " pod="openshift-authentication/oauth-openshift-66458b6674-f74j6" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.772410 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.791830 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.811427 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.840806 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.845070 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcv7n" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.863891 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.864705 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/98edd4b9-a5db-4e54-b89a-07ea8d30ea38-etcd-ca\") pod \"etcd-operator-69b85846b6-vxj85\" (UID: \"98edd4b9-a5db-4e54-b89a-07ea8d30ea38\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-vxj85" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.864773 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/493bf246-7422-45bb-a74f-34f4314445de-available-featuregates\") pod \"openshift-config-operator-5777786469-8v2mh\" (UID: \"493bf246-7422-45bb-a74f-34f4314445de\") " pod="openshift-config-operator/openshift-config-operator-5777786469-8v2mh" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.864795 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jm7kx\" (UniqueName: \"kubernetes.io/projected/6e718db5-bd36-400d-8121-5afc39eb6777-kube-api-access-jm7kx\") pod \"router-default-68cf44c8b8-tbd7w\" (UID: \"6e718db5-bd36-400d-8121-5afc39eb6777\") " pod="openshift-ingress/router-default-68cf44c8b8-tbd7w" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.864811 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-75tts\" (UniqueName: \"kubernetes.io/projected/98edd4b9-a5db-4e54-b89a-07ea8d30ea38-kube-api-access-75tts\") pod \"etcd-operator-69b85846b6-vxj85\" (UID: \"98edd4b9-a5db-4e54-b89a-07ea8d30ea38\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-vxj85" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 
09:19:42.864827 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/fb37dc5c-fa1d-4f5c-a331-e4855245f95f-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-q8ftx\" (UID: \"fb37dc5c-fa1d-4f5c-a331-e4855245f95f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-q8ftx" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.864845 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/34bfafc0-b014-412d-8524-50aeb30d19ae-service-ca\") pod \"console-64d44f6ddf-qgt8d\" (UID: \"34bfafc0-b014-412d-8524-50aeb30d19ae\") " pod="openshift-console/console-64d44f6ddf-qgt8d" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.864859 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/34bfafc0-b014-412d-8524-50aeb30d19ae-oauth-serving-cert\") pod \"console-64d44f6ddf-qgt8d\" (UID: \"34bfafc0-b014-412d-8524-50aeb30d19ae\") " pod="openshift-console/console-64d44f6ddf-qgt8d" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.864898 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f0907a3e-c31f-491e-ac86-e289dd5d426a-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-fdg8l\" (UID: \"f0907a3e-c31f-491e-ac86-e289dd5d426a\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-fdg8l" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.864913 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/65764ab8-1117-4c6d-9af3-b8665ebeac26-metrics-tls\") pod \"dns-operator-799b87ffcd-2jmh4\" (UID: \"65764ab8-1117-4c6d-9af3-b8665ebeac26\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-2jmh4" Jan 
21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.864979 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34bfafc0-b014-412d-8524-50aeb30d19ae-trusted-ca-bundle\") pod \"console-64d44f6ddf-qgt8d\" (UID: \"34bfafc0-b014-412d-8524-50aeb30d19ae\") " pod="openshift-console/console-64d44f6ddf-qgt8d" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.865014 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ljgb8\" (UniqueName: \"kubernetes.io/projected/493bf246-7422-45bb-a74f-34f4314445de-kube-api-access-ljgb8\") pod \"openshift-config-operator-5777786469-8v2mh\" (UID: \"493bf246-7422-45bb-a74f-34f4314445de\") " pod="openshift-config-operator/openshift-config-operator-5777786469-8v2mh" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.865042 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/98edd4b9-a5db-4e54-b89a-07ea8d30ea38-etcd-client\") pod \"etcd-operator-69b85846b6-vxj85\" (UID: \"98edd4b9-a5db-4e54-b89a-07ea8d30ea38\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-vxj85" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.865061 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/493bf246-7422-45bb-a74f-34f4314445de-serving-cert\") pod \"openshift-config-operator-5777786469-8v2mh\" (UID: \"493bf246-7422-45bb-a74f-34f4314445de\") " pod="openshift-config-operator/openshift-config-operator-5777786469-8v2mh" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.865078 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/fb37dc5c-fa1d-4f5c-a331-e4855245f95f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-q8ftx\" 
(UID: \"fb37dc5c-fa1d-4f5c-a331-e4855245f95f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-q8ftx" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.865104 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98edd4b9-a5db-4e54-b89a-07ea8d30ea38-config\") pod \"etcd-operator-69b85846b6-vxj85\" (UID: \"98edd4b9-a5db-4e54-b89a-07ea8d30ea38\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-vxj85" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.865135 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e718db5-bd36-400d-8121-5afc39eb6777-metrics-certs\") pod \"router-default-68cf44c8b8-tbd7w\" (UID: \"6e718db5-bd36-400d-8121-5afc39eb6777\") " pod="openshift-ingress/router-default-68cf44c8b8-tbd7w" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.865151 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/65764ab8-1117-4c6d-9af3-b8665ebeac26-tmp-dir\") pod \"dns-operator-799b87ffcd-2jmh4\" (UID: \"65764ab8-1117-4c6d-9af3-b8665ebeac26\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-2jmh4" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.865175 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f0907a3e-c31f-491e-ac86-e289dd5d426a-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-fdg8l\" (UID: \"f0907a3e-c31f-491e-ac86-e289dd5d426a\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-fdg8l" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.865192 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qsdhr\" (UniqueName: \"kubernetes.io/projected/fb37dc5c-fa1d-4f5c-a331-e4855245f95f-kube-api-access-qsdhr\") pod 
\"cluster-image-registry-operator-86c45576b9-q8ftx\" (UID: \"fb37dc5c-fa1d-4f5c-a331-e4855245f95f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-q8ftx" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.865218 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cjpqr\" (UniqueName: \"kubernetes.io/projected/f0907a3e-c31f-491e-ac86-e289dd5d426a-kube-api-access-cjpqr\") pod \"ingress-operator-6b9cb4dbcf-fdg8l\" (UID: \"f0907a3e-c31f-491e-ac86-e289dd5d426a\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-fdg8l" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.865233 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fb37dc5c-fa1d-4f5c-a331-e4855245f95f-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-q8ftx\" (UID: \"fb37dc5c-fa1d-4f5c-a331-e4855245f95f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-q8ftx" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.865249 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/34bfafc0-b014-412d-8524-50aeb30d19ae-console-oauth-config\") pod \"console-64d44f6ddf-qgt8d\" (UID: \"34bfafc0-b014-412d-8524-50aeb30d19ae\") " pod="openshift-console/console-64d44f6ddf-qgt8d" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.865266 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fb37dc5c-fa1d-4f5c-a331-e4855245f95f-tmp\") pod \"cluster-image-registry-operator-86c45576b9-q8ftx\" (UID: \"fb37dc5c-fa1d-4f5c-a331-e4855245f95f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-q8ftx" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.865293 5113 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e718db5-bd36-400d-8121-5afc39eb6777-service-ca-bundle\") pod \"router-default-68cf44c8b8-tbd7w\" (UID: \"6e718db5-bd36-400d-8121-5afc39eb6777\") " pod="openshift-ingress/router-default-68cf44c8b8-tbd7w" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.865321 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f0907a3e-c31f-491e-ac86-e289dd5d426a-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-fdg8l\" (UID: \"f0907a3e-c31f-491e-ac86-e289dd5d426a\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-fdg8l" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.865338 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-74vzk\" (UniqueName: \"kubernetes.io/projected/65764ab8-1117-4c6d-9af3-b8665ebeac26-kube-api-access-74vzk\") pod \"dns-operator-799b87ffcd-2jmh4\" (UID: \"65764ab8-1117-4c6d-9af3-b8665ebeac26\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-2jmh4" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.865347 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/493bf246-7422-45bb-a74f-34f4314445de-available-featuregates\") pod \"openshift-config-operator-5777786469-8v2mh\" (UID: \"493bf246-7422-45bb-a74f-34f4314445de\") " pod="openshift-config-operator/openshift-config-operator-5777786469-8v2mh" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.865365 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/34bfafc0-b014-412d-8524-50aeb30d19ae-console-serving-cert\") pod \"console-64d44f6ddf-qgt8d\" (UID: \"34bfafc0-b014-412d-8524-50aeb30d19ae\") " pod="openshift-console/console-64d44f6ddf-qgt8d" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 
09:19:42.865381 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6e718db5-bd36-400d-8121-5afc39eb6777-default-certificate\") pod \"router-default-68cf44c8b8-tbd7w\" (UID: \"6e718db5-bd36-400d-8121-5afc39eb6777\") " pod="openshift-ingress/router-default-68cf44c8b8-tbd7w" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.865402 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6e718db5-bd36-400d-8121-5afc39eb6777-stats-auth\") pod \"router-default-68cf44c8b8-tbd7w\" (UID: \"6e718db5-bd36-400d-8121-5afc39eb6777\") " pod="openshift-ingress/router-default-68cf44c8b8-tbd7w" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.865421 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/98edd4b9-a5db-4e54-b89a-07ea8d30ea38-tmp-dir\") pod \"etcd-operator-69b85846b6-vxj85\" (UID: \"98edd4b9-a5db-4e54-b89a-07ea8d30ea38\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-vxj85" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.865454 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/34bfafc0-b014-412d-8524-50aeb30d19ae-console-config\") pod \"console-64d44f6ddf-qgt8d\" (UID: \"34bfafc0-b014-412d-8524-50aeb30d19ae\") " pod="openshift-console/console-64d44f6ddf-qgt8d" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.865471 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x64bl\" (UniqueName: \"kubernetes.io/projected/34bfafc0-b014-412d-8524-50aeb30d19ae-kube-api-access-x64bl\") pod \"console-64d44f6ddf-qgt8d\" (UID: \"34bfafc0-b014-412d-8524-50aeb30d19ae\") " pod="openshift-console/console-64d44f6ddf-qgt8d" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 
09:19:42.865487 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4844g\" (UniqueName: \"kubernetes.io/projected/d0b4d330-efd9-4d42-b2a7-bf8c0c126f8a-kube-api-access-4844g\") pod \"migrator-866fcbc849-f8w74\" (UID: \"d0b4d330-efd9-4d42-b2a7-bf8c0c126f8a\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-f8w74" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.865503 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/98edd4b9-a5db-4e54-b89a-07ea8d30ea38-etcd-service-ca\") pod \"etcd-operator-69b85846b6-vxj85\" (UID: \"98edd4b9-a5db-4e54-b89a-07ea8d30ea38\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-vxj85" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.865522 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98edd4b9-a5db-4e54-b89a-07ea8d30ea38-serving-cert\") pod \"etcd-operator-69b85846b6-vxj85\" (UID: \"98edd4b9-a5db-4e54-b89a-07ea8d30ea38\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-vxj85" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.865539 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fb37dc5c-fa1d-4f5c-a331-e4855245f95f-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-q8ftx\" (UID: \"fb37dc5c-fa1d-4f5c-a331-e4855245f95f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-q8ftx" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.865954 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/fb37dc5c-fa1d-4f5c-a331-e4855245f95f-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-q8ftx\" (UID: 
\"fb37dc5c-fa1d-4f5c-a331-e4855245f95f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-q8ftx" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.866359 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fb37dc5c-fa1d-4f5c-a331-e4855245f95f-tmp\") pod \"cluster-image-registry-operator-86c45576b9-q8ftx\" (UID: \"fb37dc5c-fa1d-4f5c-a331-e4855245f95f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-q8ftx" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.866479 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/65764ab8-1117-4c6d-9af3-b8665ebeac26-tmp-dir\") pod \"dns-operator-799b87ffcd-2jmh4\" (UID: \"65764ab8-1117-4c6d-9af3-b8665ebeac26\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-2jmh4" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.867058 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/34bfafc0-b014-412d-8524-50aeb30d19ae-console-config\") pod \"console-64d44f6ddf-qgt8d\" (UID: \"34bfafc0-b014-412d-8524-50aeb30d19ae\") " pod="openshift-console/console-64d44f6ddf-qgt8d" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.867349 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34bfafc0-b014-412d-8524-50aeb30d19ae-trusted-ca-bundle\") pod \"console-64d44f6ddf-qgt8d\" (UID: \"34bfafc0-b014-412d-8524-50aeb30d19ae\") " pod="openshift-console/console-64d44f6ddf-qgt8d" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.867383 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/98edd4b9-a5db-4e54-b89a-07ea8d30ea38-tmp-dir\") pod \"etcd-operator-69b85846b6-vxj85\" (UID: 
\"98edd4b9-a5db-4e54-b89a-07ea8d30ea38\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-vxj85" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.867401 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/34bfafc0-b014-412d-8524-50aeb30d19ae-oauth-serving-cert\") pod \"console-64d44f6ddf-qgt8d\" (UID: \"34bfafc0-b014-412d-8524-50aeb30d19ae\") " pod="openshift-console/console-64d44f6ddf-qgt8d" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.867592 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fb37dc5c-fa1d-4f5c-a331-e4855245f95f-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-q8ftx\" (UID: \"fb37dc5c-fa1d-4f5c-a331-e4855245f95f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-q8ftx" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.867879 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f0907a3e-c31f-491e-ac86-e289dd5d426a-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-fdg8l\" (UID: \"f0907a3e-c31f-491e-ac86-e289dd5d426a\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-fdg8l" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.867934 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/34bfafc0-b014-412d-8524-50aeb30d19ae-service-ca\") pod \"console-64d44f6ddf-qgt8d\" (UID: \"34bfafc0-b014-412d-8524-50aeb30d19ae\") " pod="openshift-console/console-64d44f6ddf-qgt8d" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.869546 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/fb37dc5c-fa1d-4f5c-a331-e4855245f95f-image-registry-operator-tls\") pod 
\"cluster-image-registry-operator-86c45576b9-q8ftx\" (UID: \"fb37dc5c-fa1d-4f5c-a331-e4855245f95f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-q8ftx" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.869831 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/65764ab8-1117-4c6d-9af3-b8665ebeac26-metrics-tls\") pod \"dns-operator-799b87ffcd-2jmh4\" (UID: \"65764ab8-1117-4c6d-9af3-b8665ebeac26\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-2jmh4" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.870094 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/34bfafc0-b014-412d-8524-50aeb30d19ae-console-serving-cert\") pod \"console-64d44f6ddf-qgt8d\" (UID: \"34bfafc0-b014-412d-8524-50aeb30d19ae\") " pod="openshift-console/console-64d44f6ddf-qgt8d" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.870601 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/493bf246-7422-45bb-a74f-34f4314445de-serving-cert\") pod \"openshift-config-operator-5777786469-8v2mh\" (UID: \"493bf246-7422-45bb-a74f-34f4314445de\") " pod="openshift-config-operator/openshift-config-operator-5777786469-8v2mh" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.871334 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.872105 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/34bfafc0-b014-412d-8524-50aeb30d19ae-console-oauth-config\") pod \"console-64d44f6ddf-qgt8d\" (UID: \"34bfafc0-b014-412d-8524-50aeb30d19ae\") " pod="openshift-console/console-64d44f6ddf-qgt8d" Jan 21 09:19:42 crc 
kubenswrapper[5113]: I0121 09:19:42.892043 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.899954 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f0907a3e-c31f-491e-ac86-e289dd5d426a-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-fdg8l\" (UID: \"f0907a3e-c31f-491e-ac86-e289dd5d426a\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-fdg8l" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.912606 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.932600 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.951662 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.971189 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.983221 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98edd4b9-a5db-4e54-b89a-07ea8d30ea38-serving-cert\") pod \"etcd-operator-69b85846b6-vxj85\" (UID: \"98edd4b9-a5db-4e54-b89a-07ea8d30ea38\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-vxj85" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 09:19:42.991130 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Jan 21 09:19:42 crc kubenswrapper[5113]: I0121 
09:19:42.998977 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/98edd4b9-a5db-4e54-b89a-07ea8d30ea38-etcd-client\") pod \"etcd-operator-69b85846b6-vxj85\" (UID: \"98edd4b9-a5db-4e54-b89a-07ea8d30ea38\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-vxj85" Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.011806 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.031764 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.036806 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98edd4b9-a5db-4e54-b89a-07ea8d30ea38-config\") pod \"etcd-operator-69b85846b6-vxj85\" (UID: \"98edd4b9-a5db-4e54-b89a-07ea8d30ea38\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-vxj85" Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.051249 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.056345 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/98edd4b9-a5db-4e54-b89a-07ea8d30ea38-etcd-ca\") pod \"etcd-operator-69b85846b6-vxj85\" (UID: \"98edd4b9-a5db-4e54-b89a-07ea8d30ea38\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-vxj85" Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.072833 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.078025 5113 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/98edd4b9-a5db-4e54-b89a-07ea8d30ea38-etcd-service-ca\") pod \"etcd-operator-69b85846b6-vxj85\" (UID: \"98edd4b9-a5db-4e54-b89a-07ea8d30ea38\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-vxj85" Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.091903 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.132108 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.152510 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.172130 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.182937 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6e718db5-bd36-400d-8121-5afc39eb6777-default-certificate\") pod \"router-default-68cf44c8b8-tbd7w\" (UID: \"6e718db5-bd36-400d-8121-5afc39eb6777\") " pod="openshift-ingress/router-default-68cf44c8b8-tbd7w" Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.192761 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.200144 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6e718db5-bd36-400d-8121-5afc39eb6777-stats-auth\") pod \"router-default-68cf44c8b8-tbd7w\" (UID: 
\"6e718db5-bd36-400d-8121-5afc39eb6777\") " pod="openshift-ingress/router-default-68cf44c8b8-tbd7w"
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.212008 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\""
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.232518 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\""
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.242236 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e718db5-bd36-400d-8121-5afc39eb6777-metrics-certs\") pod \"router-default-68cf44c8b8-tbd7w\" (UID: \"6e718db5-bd36-400d-8121-5afc39eb6777\") " pod="openshift-ingress/router-default-68cf44c8b8-tbd7w"
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.252109 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\""
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.257282 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e718db5-bd36-400d-8121-5afc39eb6777-service-ca-bundle\") pod \"router-default-68cf44c8b8-tbd7w\" (UID: \"6e718db5-bd36-400d-8121-5afc39eb6777\") " pod="openshift-ingress/router-default-68cf44c8b8-tbd7w"
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.292073 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\""
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.295717 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnj9q\" (UniqueName: \"kubernetes.io/projected/8483d143-98a0-4b97-af59-bf98eceb47cd-kube-api-access-qnj9q\") pod \"apiserver-9ddfb9f55-bnjd9\" (UID: \"8483d143-98a0-4b97-af59-bf98eceb47cd\") " pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9"
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.312059 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\""
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.331944 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\""
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.372415 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5wkf\" (UniqueName: \"kubernetes.io/projected/8c97f377-41b0-44cb-a19b-fd80d93481e2-kube-api-access-j5wkf\") pod \"machine-approver-54c688565-hcv8d\" (UID: \"8c97f377-41b0-44cb-a19b-fd80d93481e2\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-hcv8d"
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.390886 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfs4r\" (UniqueName: \"kubernetes.io/projected/b8ec25fb-f982-43d5-90dd-2f369ea2fa7f-kube-api-access-zfs4r\") pod \"controller-manager-65b6cccf98-twjhp\" (UID: \"b8ec25fb-f982-43d5-90dd-2f369ea2fa7f\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-twjhp"
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.391849 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\""
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.412024 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\""
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.416292 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-twjhp"
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.432132 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\""
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.452477 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-hcv8d"
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.453205 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\""
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.476217 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9"
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.493507 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\""
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.513769 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\""
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.533666 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\""
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.552017 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\""
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.575718 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\""
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.577457 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 21 09:19:43 crc kubenswrapper[5113]: E0121 09:19:43.577632 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 21 09:19:43 crc kubenswrapper[5113]: E0121 09:19:43.577664 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 21 09:19:43 crc kubenswrapper[5113]: E0121 09:19:43.577682 5113 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 09:19:43 crc kubenswrapper[5113]: E0121 09:19:43.577778 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-21 09:19:59.577754446 +0000 UTC m=+129.078581505 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.578127 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.578259 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 21 09:19:43 crc kubenswrapper[5113]: E0121 09:19:43.578369 5113 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.578403 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 21 09:19:43 crc kubenswrapper[5113]: E0121 09:19:43.578516 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 09:19:59.578475535 +0000 UTC m=+129.079302634 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 21 09:19:43 crc kubenswrapper[5113]: E0121 09:19:43.578573 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 21 09:19:43 crc kubenswrapper[5113]: E0121 09:19:43.583539 5113 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 21 09:19:43 crc kubenswrapper[5113]: E0121 09:19:43.583579 5113 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 09:19:43 crc kubenswrapper[5113]: E0121 09:19:43.578609 5113 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 21 09:19:43 crc kubenswrapper[5113]: E0121 09:19:43.583711 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-21 09:19:59.583667904 +0000 UTC m=+129.084495003 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 09:19:43 crc kubenswrapper[5113]: E0121 09:19:43.583952 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-21 09:19:59.583924581 +0000 UTC m=+129.084751670 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.591965 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\""
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.611863 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\""
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.631824 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\""
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.651810 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\""
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.670947 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\""
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.682690 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-hcv8d" event={"ID":"8c97f377-41b0-44cb-a19b-fd80d93481e2","Type":"ContainerStarted","Data":"7b7303c0e39c4f2fe79fe1d3914eeb9bc02a0b3d21de0235d3125b87ea939120"}
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.684183 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:19:43 crc kubenswrapper[5113]: E0121 09:19:43.684444 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:59.684383211 +0000 UTC m=+129.185210290 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.692840 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\""
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.710618 5113 request.go:752] "Waited before sending request" delay="1.015232743s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpprof-cert&limit=500&resourceVersion=0"
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.713499 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\""
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.732199 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\""
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.751376 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\""
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.773135 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\""
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.785821 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d75af50-e19d-4048-b80e-51dae4c3378e-metrics-certs\") pod \"network-metrics-daemon-tcv7n\" (UID: \"0d75af50-e19d-4048-b80e-51dae4c3378e\") " pod="openshift-multus/network-metrics-daemon-tcv7n"
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.792067 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\""
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.817455 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\""
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.832444 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\""
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.843551 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.845162 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.848533 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.852331 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\""
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.872888 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\""
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.891331 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\""
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.897453 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-bnjd9"]
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.909764 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-twjhp"]
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.917366 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\""
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.931475 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\""
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.952854 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\""
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.971889 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\""
Jan 21 09:19:43 crc kubenswrapper[5113]: I0121 09:19:43.991450 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\""
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.012085 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\""
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.032567 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\""
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.052142 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\""
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.072081 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\""
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.091640 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\""
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.111252 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\""
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.131900 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\""
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.152606 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\""
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.172051 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\""
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.192238 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\""
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.211337 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\""
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.231220 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\""
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.252149 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\""
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.272455 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\""
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.291782 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\""
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.311552 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\""
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.331575 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\""
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.351755 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\""
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.371756 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\""
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.391139 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\""
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.412892 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\""
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.432076 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\""
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.451524 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\""
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.472350 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\""
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.491386 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\""
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.511492 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\""
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.531583 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\""
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.552148 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.572901 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\""
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.592438 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\""
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.611697 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.632274 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\""
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.651656 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.672698 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\""
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.687573 5113 generic.go:358] "Generic (PLEG): container finished" podID="8483d143-98a0-4b97-af59-bf98eceb47cd" containerID="411d840ee53bd8194ee0c105d1b95a1c7880f68a149640cc3017fede62ab1e45" exitCode=0
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.687671 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9" event={"ID":"8483d143-98a0-4b97-af59-bf98eceb47cd","Type":"ContainerDied","Data":"411d840ee53bd8194ee0c105d1b95a1c7880f68a149640cc3017fede62ab1e45"}
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.687899 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9" event={"ID":"8483d143-98a0-4b97-af59-bf98eceb47cd","Type":"ContainerStarted","Data":"c20191a44f3119ba797e4529b07b4352c2634f560a37cb05290433c0e66fcb56"}
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.689375 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-twjhp" event={"ID":"b8ec25fb-f982-43d5-90dd-2f369ea2fa7f","Type":"ContainerStarted","Data":"5c7ee8834db6fe5e833de7fb434689b809fdfdb3a947d516c14cf77fcd5e9894"}
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.689423 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-twjhp" event={"ID":"b8ec25fb-f982-43d5-90dd-2f369ea2fa7f","Type":"ContainerStarted","Data":"3c5e3aaa1ceba4ea49363799472008fcc15faadd0a2301228e7c90227cb0d18b"}
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.689451 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-twjhp"
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.691417 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-hcv8d" event={"ID":"8c97f377-41b0-44cb-a19b-fd80d93481e2","Type":"ContainerStarted","Data":"6beea65f57cafc171a4aebe846344f2d56f7c3042c0dab0ab92c396a3c741a07"}
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.691541 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-hcv8d" event={"ID":"8c97f377-41b0-44cb-a19b-fd80d93481e2","Type":"ContainerStarted","Data":"027befda5d11a6d780825c0d05c867723f14409bb3ace4d42e63c3b4894d6a39"}
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.721353 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcgls\" (UniqueName: \"kubernetes.io/projected/d9041fd4-ea1a-453e-b9c6-efe382434cc0-kube-api-access-lcgls\") pod \"machine-api-operator-755bb95488-qtl4r\" (UID: \"d9041fd4-ea1a-453e-b9c6-efe382434cc0\") " pod="openshift-machine-api/machine-api-operator-755bb95488-qtl4r"
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.730419 5113 request.go:752] "Waited before sending request" delay="1.972061172s" reason="client-side throttling, not priority and fairness" verb="POST" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token"
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.746808 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-svpz5\" (UniqueName: \"kubernetes.io/projected/cc029c59-845c-4021-be17-fe92d61a361f-kube-api-access-svpz5\") pod \"openshift-controller-manager-operator-686468bdd5-qbp2v\" (UID: \"cc029c59-845c-4021-be17-fe92d61a361f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qbp2v"
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.749818 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj9ph\" (UniqueName: \"kubernetes.io/projected/579c5f31-2382-48db-8f59-60a7ed0827ed-kube-api-access-gj9ph\") pod \"route-controller-manager-776cdc94d6-wcvvf\" (UID: \"579c5f31-2382-48db-8f59-60a7ed0827ed\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wcvvf"
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.765609 5113 ???:1] "http: TLS handshake error from 192.168.126.11:36884: no serving certificate available for the kubelet"
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.772114 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af605d44-e53a-4c60-8372-384c58f82b2b-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-ggq6c\" (UID: \"af605d44-e53a-4c60-8372-384c58f82b2b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-ggq6c"
Jan 21 09:19:44 crc kubenswrapper[5113]: E0121 09:19:44.786793 5113 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: failed to sync secret cache: timed out waiting for the condition
Jan 21 09:19:44 crc kubenswrapper[5113]: E0121 09:19:44.786931 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d75af50-e19d-4048-b80e-51dae4c3378e-metrics-certs podName:0d75af50-e19d-4048-b80e-51dae4c3378e nodeName:}" failed. No retries permitted until 2026-01-21 09:20:00.7869076 +0000 UTC m=+130.287734669 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0d75af50-e19d-4048-b80e-51dae4c3378e-metrics-certs") pod "network-metrics-daemon-tcv7n" (UID: "0d75af50-e19d-4048-b80e-51dae4c3378e") : failed to sync secret cache: timed out waiting for the condition
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.787195 5113 ???:1] "http: TLS handshake error from 192.168.126.11:36888: no serving certificate available for the kubelet"
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.790350 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gv27\" (UniqueName: \"kubernetes.io/projected/6f837af6-e4b6-4cc8-a869-125d0646e747-kube-api-access-9gv27\") pod \"cluster-samples-operator-6b564684c8-hr5w7\" (UID: \"6f837af6-e4b6-4cc8-a869-125d0646e747\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-hr5w7"
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.801082 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wcvvf"
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.809324 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvvw2\" (UniqueName: \"kubernetes.io/projected/e4a07741-f83d-4dd4-b52c-1e55f3629eb1-kube-api-access-pvvw2\") pod \"downloads-747b44746d-6cwdn\" (UID: \"e4a07741-f83d-4dd4-b52c-1e55f3629eb1\") " pod="openshift-console/downloads-747b44746d-6cwdn"
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.809450 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-hr5w7"
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.821397 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-qtl4r"
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.825153 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-26d8w\" (UniqueName: \"kubernetes.io/projected/818c11fe-b682-4b2c-9f47-dee838219e31-kube-api-access-26d8w\") pod \"openshift-apiserver-operator-846cbfc458-78rvb\" (UID: \"818c11fe-b682-4b2c-9f47-dee838219e31\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-78rvb"
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.836368 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qbp2v"
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.842744 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-ggq6c"
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.859431 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vlqc\" (UniqueName: \"kubernetes.io/projected/47f4bfde-8b00-4a3c-b405-f928eda4dc04-kube-api-access-4vlqc\") pod \"authentication-operator-7f5c659b84-4nhkn\" (UID: \"47f4bfde-8b00-4a3c-b405-f928eda4dc04\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4nhkn"
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.868787 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-twjhp"
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.870998 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-599c4\" (UniqueName: \"kubernetes.io/projected/b85968dd-28ce-49c6-b8bf-ac62d18452b4-kube-api-access-599c4\") pod \"console-operator-67c89758df-b46m8\" (UID: \"b85968dd-28ce-49c6-b8bf-ac62d18452b4\") " pod="openshift-console-operator/console-operator-67c89758df-b46m8"
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.871134 5113 ???:1] "http: TLS handshake error from 192.168.126.11:36898: no serving certificate available for the kubelet"
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.889123 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlth5\" (UniqueName: \"kubernetes.io/projected/92e0cb66-e547-44e5-a384-bd522e554577-kube-api-access-vlth5\") pod \"oauth-openshift-66458b6674-f74j6\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " pod="openshift-authentication/oauth-openshift-66458b6674-f74j6"
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.891911 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.913082 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\""
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.947705 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm7kx\" (UniqueName: \"kubernetes.io/projected/6e718db5-bd36-400d-8121-5afc39eb6777-kube-api-access-jm7kx\") pod \"router-default-68cf44c8b8-tbd7w\" (UID: \"6e718db5-bd36-400d-8121-5afc39eb6777\") " pod="openshift-ingress/router-default-68cf44c8b8-tbd7w"
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.969641 5113 ???:1] "http: TLS handshake error from 192.168.126.11:36902: no serving certificate available for the kubelet"
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.975393 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-75tts\" (UniqueName: \"kubernetes.io/projected/98edd4b9-a5db-4e54-b89a-07ea8d30ea38-kube-api-access-75tts\") pod \"etcd-operator-69b85846b6-vxj85\" (UID: \"98edd4b9-a5db-4e54-b89a-07ea8d30ea38\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-vxj85"
Jan 21 09:19:44 crc kubenswrapper[5113]: I0121 09:19:44.997027 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fb37dc5c-fa1d-4f5c-a331-e4855245f95f-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-q8ftx\" (UID: \"fb37dc5c-fa1d-4f5c-a331-e4855245f95f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-q8ftx"
Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.014648 5113 ???:1] "http: TLS handshake error from 192.168.126.11:36916: no serving certificate available for the kubelet"
Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.015801 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljgb8\" (UniqueName: \"kubernetes.io/projected/493bf246-7422-45bb-a74f-34f4314445de-kube-api-access-ljgb8\") pod \"openshift-config-operator-5777786469-8v2mh\" (UID: \"493bf246-7422-45bb-a74f-34f4314445de\") " pod="openshift-config-operator/openshift-config-operator-5777786469-8v2mh"
Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.023346 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-f74j6"
Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.035619 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsdhr\" (UniqueName: \"kubernetes.io/projected/fb37dc5c-fa1d-4f5c-a331-e4855245f95f-kube-api-access-qsdhr\") pod \"cluster-image-registry-operator-86c45576b9-q8ftx\" (UID: \"fb37dc5c-fa1d-4f5c-a331-e4855245f95f\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-q8ftx"
Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.042263 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4nhkn"
Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.044097 5113 ???:1] "http: TLS handshake error from 192.168.126.11:36926: no serving certificate available for the kubelet"
Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.050946 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f0907a3e-c31f-491e-ac86-e289dd5d426a-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-fdg8l\" (UID: \"f0907a3e-c31f-491e-ac86-e289dd5d426a\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-fdg8l"
Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.051110 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-747b44746d-6cwdn" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.076031 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-74vzk\" (UniqueName: \"kubernetes.io/projected/65764ab8-1117-4c6d-9af3-b8665ebeac26-kube-api-access-74vzk\") pod \"dns-operator-799b87ffcd-2jmh4\" (UID: \"65764ab8-1117-4c6d-9af3-b8665ebeac26\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-2jmh4" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.076558 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-78rvb" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.090158 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4844g\" (UniqueName: \"kubernetes.io/projected/d0b4d330-efd9-4d42-b2a7-bf8c0c126f8a-kube-api-access-4844g\") pod \"migrator-866fcbc849-f8w74\" (UID: \"d0b4d330-efd9-4d42-b2a7-bf8c0c126f8a\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-f8w74" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.109721 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-wcvvf"] Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.115829 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x64bl\" (UniqueName: \"kubernetes.io/projected/34bfafc0-b014-412d-8524-50aeb30d19ae-kube-api-access-x64bl\") pod \"console-64d44f6ddf-qgt8d\" (UID: \"34bfafc0-b014-412d-8524-50aeb30d19ae\") " pod="openshift-console/console-64d44f6ddf-qgt8d" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.128243 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjpqr\" (UniqueName: \"kubernetes.io/projected/f0907a3e-c31f-491e-ac86-e289dd5d426a-kube-api-access-cjpqr\") pod 
\"ingress-operator-6b9cb4dbcf-fdg8l\" (UID: \"f0907a3e-c31f-491e-ac86-e289dd5d426a\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-fdg8l" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.128710 5113 ???:1] "http: TLS handshake error from 192.168.126.11:36942: no serving certificate available for the kubelet" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.128916 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-b46m8" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.129026 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-hr5w7"] Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.149973 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-q8ftx" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.153623 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-qtl4r"] Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.158952 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qbp2v"] Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.171365 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Jan 21 09:19:45 crc kubenswrapper[5113]: W0121 09:19:45.175505 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod579c5f31_2382_48db_8f59_60a7ed0827ed.slice/crio-05f6c9eca8f7e96818eab8e413fc30407384ca9e54c5255444c801baf0d572f0 WatchSource:0}: Error finding container 05f6c9eca8f7e96818eab8e413fc30407384ca9e54c5255444c801baf0d572f0: Status 404 returned 
error can't find the container with id 05f6c9eca8f7e96818eab8e413fc30407384ca9e54c5255444c801baf0d572f0 Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.180758 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-ggq6c"] Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.183198 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-2jmh4" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.187492 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-8v2mh" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.194066 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-qgt8d" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.194114 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.201309 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-fdg8l" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.207975 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-vxj85" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.213475 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.216002 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-tbd7w" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.216258 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.216296 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fcd945f6-07b1-46f0-9c38-69d04075b569-bound-sa-token\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.216316 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/be5020dc-d64c-44bf-b6fa-10f148f9f046-etcd-client\") pod \"apiserver-8596bd845d-nrprx\" (UID: \"be5020dc-d64c-44bf-b6fa-10f148f9f046\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nrprx" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.216333 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/be5020dc-d64c-44bf-b6fa-10f148f9f046-encryption-config\") pod \"apiserver-8596bd845d-nrprx\" (UID: \"be5020dc-d64c-44bf-b6fa-10f148f9f046\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nrprx" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.216348 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/be5020dc-d64c-44bf-b6fa-10f148f9f046-audit-dir\") pod \"apiserver-8596bd845d-nrprx\" (UID: \"be5020dc-d64c-44bf-b6fa-10f148f9f046\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nrprx" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.216643 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztqpt\" (UniqueName: \"kubernetes.io/projected/fcd945f6-07b1-46f0-9c38-69d04075b569-kube-api-access-ztqpt\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.216693 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be5020dc-d64c-44bf-b6fa-10f148f9f046-trusted-ca-bundle\") pod \"apiserver-8596bd845d-nrprx\" (UID: \"be5020dc-d64c-44bf-b6fa-10f148f9f046\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nrprx" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.216826 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fcd945f6-07b1-46f0-9c38-69d04075b569-registry-tls\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.216847 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbgtf\" (UniqueName: \"kubernetes.io/projected/be5020dc-d64c-44bf-b6fa-10f148f9f046-kube-api-access-sbgtf\") pod \"apiserver-8596bd845d-nrprx\" (UID: \"be5020dc-d64c-44bf-b6fa-10f148f9f046\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nrprx" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 
09:19:45.216865 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/688053cc-c60a-4778-8b7b-f79c508916fa-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-pltp7\" (UID: \"688053cc-c60a-4778-8b7b-f79c508916fa\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pltp7" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.216926 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-282qj\" (UniqueName: \"kubernetes.io/projected/688053cc-c60a-4778-8b7b-f79c508916fa-kube-api-access-282qj\") pod \"package-server-manager-77f986bd66-pltp7\" (UID: \"688053cc-c60a-4778-8b7b-f79c508916fa\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pltp7" Jan 21 09:19:45 crc kubenswrapper[5113]: E0121 09:19:45.217005 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:45.716962805 +0000 UTC m=+115.217789854 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.217026 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/be5020dc-d64c-44bf-b6fa-10f148f9f046-audit-policies\") pod \"apiserver-8596bd845d-nrprx\" (UID: \"be5020dc-d64c-44bf-b6fa-10f148f9f046\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nrprx" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.217235 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/be5020dc-d64c-44bf-b6fa-10f148f9f046-etcd-serving-ca\") pod \"apiserver-8596bd845d-nrprx\" (UID: \"be5020dc-d64c-44bf-b6fa-10f148f9f046\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nrprx" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.217498 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fcd945f6-07b1-46f0-9c38-69d04075b569-registry-certificates\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.217561 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/be5020dc-d64c-44bf-b6fa-10f148f9f046-serving-cert\") pod \"apiserver-8596bd845d-nrprx\" (UID: \"be5020dc-d64c-44bf-b6fa-10f148f9f046\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nrprx" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.218362 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fcd945f6-07b1-46f0-9c38-69d04075b569-ca-trust-extracted\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.218396 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fcd945f6-07b1-46f0-9c38-69d04075b569-trusted-ca\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.218418 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fcd945f6-07b1-46f0-9c38-69d04075b569-installation-pull-secrets\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.232641 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Jan 21 09:19:45 crc kubenswrapper[5113]: W0121 09:19:45.238071 5113 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf605d44_e53a_4c60_8372_384c58f82b2b.slice/crio-48e366b8abc3d4c14eb94774927effa82def7427e306aac92dca19e105fdfce0 WatchSource:0}: Error finding container 48e366b8abc3d4c14eb94774927effa82def7427e306aac92dca19e105fdfce0: Status 404 returned error can't find the container with id 48e366b8abc3d4c14eb94774927effa82def7427e306aac92dca19e105fdfce0 Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.263153 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-f8w74" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.303223 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-78rvb"] Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.319249 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.319450 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msp8l\" (UniqueName: \"kubernetes.io/projected/78a63e50-2573-4ab1-bdc4-ff5a86a33f47-kube-api-access-msp8l\") pod \"csi-hostpathplugin-xdz8l\" (UID: \"78a63e50-2573-4ab1-bdc4-ff5a86a33f47\") " pod="hostpath-provisioner/csi-hostpathplugin-xdz8l" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.319481 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fcd945f6-07b1-46f0-9c38-69d04075b569-trusted-ca\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " 
pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.319498 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kcw7\" (UniqueName: \"kubernetes.io/projected/d92d168e-5ab8-45e4-bfac-a7d9ecce89fb-kube-api-access-2kcw7\") pod \"ingress-canary-7lgzg\" (UID: \"d92d168e-5ab8-45e4-bfac-a7d9ecce89fb\") " pod="openshift-ingress-canary/ingress-canary-7lgzg" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.319515 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fcd945f6-07b1-46f0-9c38-69d04075b569-installation-pull-secrets\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:45 crc kubenswrapper[5113]: E0121 09:19:45.320328 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:45.820305462 +0000 UTC m=+115.321132511 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.320400 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/be5020dc-d64c-44bf-b6fa-10f148f9f046-etcd-client\") pod \"apiserver-8596bd845d-nrprx\" (UID: \"be5020dc-d64c-44bf-b6fa-10f148f9f046\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nrprx" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.320424 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/be5020dc-d64c-44bf-b6fa-10f148f9f046-audit-dir\") pod \"apiserver-8596bd845d-nrprx\" (UID: \"be5020dc-d64c-44bf-b6fa-10f148f9f046\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nrprx" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.320473 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/1e7dfcff-8023-43d7-9cdc-c0609d97fd55-tmp-dir\") pod \"kube-apiserver-operator-575994946d-jlkp8\" (UID: \"1e7dfcff-8023-43d7-9cdc-c0609d97fd55\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jlkp8" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.320563 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/78a63e50-2573-4ab1-bdc4-ff5a86a33f47-registration-dir\") pod \"csi-hostpathplugin-xdz8l\" (UID: 
\"78a63e50-2573-4ab1-bdc4-ff5a86a33f47\") " pod="hostpath-provisioner/csi-hostpathplugin-xdz8l" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.320622 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5380c95f-900e-457f-b1de-ad687328f6ea-metrics-tls\") pod \"dns-default-q9t2j\" (UID: \"5380c95f-900e-457f-b1de-ad687328f6ea\") " pod="openshift-dns/dns-default-q9t2j" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.320670 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.320691 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qks8b\" (UniqueName: \"kubernetes.io/projected/70164c7a-a7bf-47b3-8cb1-cfe98ab4449d-kube-api-access-qks8b\") pod \"machine-config-server-f9nd8\" (UID: \"70164c7a-a7bf-47b3-8cb1-cfe98ab4449d\") " pod="openshift-machine-config-operator/machine-config-server-f9nd8" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.320704 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/be5020dc-d64c-44bf-b6fa-10f148f9f046-audit-dir\") pod \"apiserver-8596bd845d-nrprx\" (UID: \"be5020dc-d64c-44bf-b6fa-10f148f9f046\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nrprx" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.320766 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2xlc\" (UniqueName: 
\"kubernetes.io/projected/917e721d-67a0-46ba-88cb-9541fba5ebf1-kube-api-access-b2xlc\") pod \"service-ca-74545575db-pj5xv\" (UID: \"917e721d-67a0-46ba-88cb-9541fba5ebf1\") " pod="openshift-service-ca/service-ca-74545575db-pj5xv" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.321112 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5380c95f-900e-457f-b1de-ad687328f6ea-config-volume\") pod \"dns-default-q9t2j\" (UID: \"5380c95f-900e-457f-b1de-ad687328f6ea\") " pod="openshift-dns/dns-default-q9t2j" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.321180 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dcdec0ee-3553-4c15-ad1f-eb6b29eec33a-config-volume\") pod \"collect-profiles-29483115-gpmgl\" (UID: \"dcdec0ee-3553-4c15-ad1f-eb6b29eec33a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483115-gpmgl" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.321301 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sbgtf\" (UniqueName: \"kubernetes.io/projected/be5020dc-d64c-44bf-b6fa-10f148f9f046-kube-api-access-sbgtf\") pod \"apiserver-8596bd845d-nrprx\" (UID: \"be5020dc-d64c-44bf-b6fa-10f148f9f046\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nrprx" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.321345 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz7k6\" (UniqueName: \"kubernetes.io/projected/dcdec0ee-3553-4c15-ad1f-eb6b29eec33a-kube-api-access-gz7k6\") pod \"collect-profiles-29483115-gpmgl\" (UID: \"dcdec0ee-3553-4c15-ad1f-eb6b29eec33a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483115-gpmgl" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.321368 5113 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfrp2\" (UniqueName: \"kubernetes.io/projected/248b0806-1d40-4954-a74e-6282de18ff7b-kube-api-access-pfrp2\") pod \"control-plane-machine-set-operator-75ffdb6fcd-q9rxd\" (UID: \"248b0806-1d40-4954-a74e-6282de18ff7b\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-q9rxd" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.321388 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e056cd26-aa6b-40b8-8c4b-cc8bef760ea6-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-nqfz9\" (UID: \"e056cd26-aa6b-40b8-8c4b-cc8bef760ea6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nqfz9" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.321442 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e056cd26-aa6b-40b8-8c4b-cc8bef760ea6-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-nqfz9\" (UID: \"e056cd26-aa6b-40b8-8c4b-cc8bef760ea6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nqfz9" Jan 21 09:19:45 crc kubenswrapper[5113]: E0121 09:19:45.321489 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:45.821478363 +0000 UTC m=+115.322305412 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.321661 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/78a63e50-2573-4ab1-bdc4-ff5a86a33f47-mountpoint-dir\") pod \"csi-hostpathplugin-xdz8l\" (UID: \"78a63e50-2573-4ab1-bdc4-ff5a86a33f47\") " pod="hostpath-provisioner/csi-hostpathplugin-xdz8l" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.321784 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/78a63e50-2573-4ab1-bdc4-ff5a86a33f47-plugins-dir\") pod \"csi-hostpathplugin-xdz8l\" (UID: \"78a63e50-2573-4ab1-bdc4-ff5a86a33f47\") " pod="hostpath-provisioner/csi-hostpathplugin-xdz8l" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.321942 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-282qj\" (UniqueName: \"kubernetes.io/projected/688053cc-c60a-4778-8b7b-f79c508916fa-kube-api-access-282qj\") pod \"package-server-manager-77f986bd66-pltp7\" (UID: \"688053cc-c60a-4778-8b7b-f79c508916fa\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pltp7" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.321992 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/9f5b98e7-087d-4bbf-aca2-2fe25b4f9956-profile-collector-cert\") pod \"olm-operator-5cdf44d969-p2z4m\" (UID: \"9f5b98e7-087d-4bbf-aca2-2fe25b4f9956\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-p2z4m" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.322094 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e7dfcff-8023-43d7-9cdc-c0609d97fd55-config\") pod \"kube-apiserver-operator-575994946d-jlkp8\" (UID: \"1e7dfcff-8023-43d7-9cdc-c0609d97fd55\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jlkp8" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.322127 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/78a63e50-2573-4ab1-bdc4-ff5a86a33f47-csi-data-dir\") pod \"csi-hostpathplugin-xdz8l\" (UID: \"78a63e50-2573-4ab1-bdc4-ff5a86a33f47\") " pod="hostpath-provisioner/csi-hostpathplugin-xdz8l" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.322256 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctsjd\" (UniqueName: \"kubernetes.io/projected/ca8d2ce1-3e0c-44f8-9327-4935a2691c4d-kube-api-access-ctsjd\") pod \"multus-admission-controller-69db94689b-8gwvh\" (UID: \"ca8d2ce1-3e0c-44f8-9327-4935a2691c4d\") " pod="openshift-multus/multus-admission-controller-69db94689b-8gwvh" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.322289 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/78a63e50-2573-4ab1-bdc4-ff5a86a33f47-socket-dir\") pod \"csi-hostpathplugin-xdz8l\" (UID: \"78a63e50-2573-4ab1-bdc4-ff5a86a33f47\") " pod="hostpath-provisioner/csi-hostpathplugin-xdz8l" Jan 21 09:19:45 crc kubenswrapper[5113]: 
I0121 09:19:45.322320 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwxxm\" (UniqueName: \"kubernetes.io/projected/d762a2fd-4dde-4547-8eb4-7bffbf4dee94-kube-api-access-cwxxm\") pod \"kube-storage-version-migrator-operator-565b79b866-6jtml\" (UID: \"d762a2fd-4dde-4547-8eb4-7bffbf4dee94\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6jtml" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.322338 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3388546-9539-4838-83b7-fd30f5875e2a-serving-cert\") pod \"service-ca-operator-5b9c976747-6r26r\" (UID: \"a3388546-9539-4838-83b7-fd30f5875e2a\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-6r26r" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.322422 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f7e099d-81da-48a0-bf4a-c152167e8f40-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-6zxjl\" (UID: \"8f7e099d-81da-48a0-bf4a-c152167e8f40\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-6zxjl" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.322466 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xws4g\" (UniqueName: \"kubernetes.io/projected/a50abcac-1e9b-44cb-8ee6-9509fe77429a-kube-api-access-xws4g\") pod \"catalog-operator-75ff9f647d-zdngj\" (UID: \"a50abcac-1e9b-44cb-8ee6-9509fe77429a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zdngj" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.322482 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" 
(UniqueName: \"kubernetes.io/empty-dir/5380c95f-900e-457f-b1de-ad687328f6ea-tmp-dir\") pod \"dns-default-q9t2j\" (UID: \"5380c95f-900e-457f-b1de-ad687328f6ea\") " pod="openshift-dns/dns-default-q9t2j" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.322514 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdbrs\" (UniqueName: \"kubernetes.io/projected/5380c95f-900e-457f-b1de-ad687328f6ea-kube-api-access-hdbrs\") pod \"dns-default-q9t2j\" (UID: \"5380c95f-900e-457f-b1de-ad687328f6ea\") " pod="openshift-dns/dns-default-q9t2j" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.322549 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be5020dc-d64c-44bf-b6fa-10f148f9f046-trusted-ca-bundle\") pod \"apiserver-8596bd845d-nrprx\" (UID: \"be5020dc-d64c-44bf-b6fa-10f148f9f046\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nrprx" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.322570 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7759c9b2-9727-4831-9610-fb40bdd99328-apiservice-cert\") pod \"packageserver-7d4fc7d867-g2wdf\" (UID: \"7759c9b2-9727-4831-9610-fb40bdd99328\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-g2wdf" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.322587 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw64g\" (UniqueName: \"kubernetes.io/projected/39eadf96-95ee-4c52-a0bd-2aee3c3a8d7c-kube-api-access-dw64g\") pod \"machine-config-operator-67c9d58cbb-mssdn\" (UID: \"39eadf96-95ee-4c52-a0bd-2aee3c3a8d7c\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mssdn" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.322608 5113 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fcd945f6-07b1-46f0-9c38-69d04075b569-ca-trust-extracted\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.322623 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/39eadf96-95ee-4c52-a0bd-2aee3c3a8d7c-images\") pod \"machine-config-operator-67c9d58cbb-mssdn\" (UID: \"39eadf96-95ee-4c52-a0bd-2aee3c3a8d7c\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mssdn" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.322642 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkbg6\" (UniqueName: \"kubernetes.io/projected/8f7e099d-81da-48a0-bf4a-c152167e8f40-kube-api-access-lkbg6\") pod \"marketplace-operator-547dbd544d-6zxjl\" (UID: \"8f7e099d-81da-48a0-bf4a-c152167e8f40\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-6zxjl" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.322672 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bc2585d6-3963-4f8e-98ad-e9a07ebcaf4e-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-rrtzq\" (UID: \"bc2585d6-3963-4f8e-98ad-e9a07ebcaf4e\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-rrtzq" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.322699 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/be5020dc-d64c-44bf-b6fa-10f148f9f046-encryption-config\") pod 
\"apiserver-8596bd845d-nrprx\" (UID: \"be5020dc-d64c-44bf-b6fa-10f148f9f046\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nrprx" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.322719 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6a7eeb36-f834-4af3-8f38-f15bda8f1adb-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-648hm\" (UID: \"6a7eeb36-f834-4af3-8f38-f15bda8f1adb\") " pod="openshift-multus/cni-sysctl-allowlist-ds-648hm" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.322757 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/be5020dc-d64c-44bf-b6fa-10f148f9f046-audit-policies\") pod \"apiserver-8596bd845d-nrprx\" (UID: \"be5020dc-d64c-44bf-b6fa-10f148f9f046\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nrprx" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.322794 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/9f5b98e7-087d-4bbf-aca2-2fe25b4f9956-tmpfs\") pod \"olm-operator-5cdf44d969-p2z4m\" (UID: \"9f5b98e7-087d-4bbf-aca2-2fe25b4f9956\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-p2z4m" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.322839 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d762a2fd-4dde-4547-8eb4-7bffbf4dee94-config\") pod \"kube-storage-version-migrator-operator-565b79b866-6jtml\" (UID: \"d762a2fd-4dde-4547-8eb4-7bffbf4dee94\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6jtml" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.322870 5113 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkwlf\" (UniqueName: \"kubernetes.io/projected/6a7eeb36-f834-4af3-8f38-f15bda8f1adb-kube-api-access-mkwlf\") pod \"cni-sysctl-allowlist-ds-648hm\" (UID: \"6a7eeb36-f834-4af3-8f38-f15bda8f1adb\") " pod="openshift-multus/cni-sysctl-allowlist-ds-648hm" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.322888 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/248b0806-1d40-4954-a74e-6282de18ff7b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-q9rxd\" (UID: \"248b0806-1d40-4954-a74e-6282de18ff7b\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-q9rxd" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.322933 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/39eadf96-95ee-4c52-a0bd-2aee3c3a8d7c-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-mssdn\" (UID: \"39eadf96-95ee-4c52-a0bd-2aee3c3a8d7c\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mssdn" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.322951 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dcdec0ee-3553-4c15-ad1f-eb6b29eec33a-secret-volume\") pod \"collect-profiles-29483115-gpmgl\" (UID: \"dcdec0ee-3553-4c15-ad1f-eb6b29eec33a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483115-gpmgl" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.323119 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/7759c9b2-9727-4831-9610-fb40bdd99328-webhook-cert\") pod \"packageserver-7d4fc7d867-g2wdf\" (UID: \"7759c9b2-9727-4831-9610-fb40bdd99328\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-g2wdf" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.323168 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3388546-9539-4838-83b7-fd30f5875e2a-config\") pod \"service-ca-operator-5b9c976747-6r26r\" (UID: \"a3388546-9539-4838-83b7-fd30f5875e2a\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-6r26r" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.323186 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nxfr\" (UniqueName: \"kubernetes.io/projected/a3388546-9539-4838-83b7-fd30f5875e2a-kube-api-access-2nxfr\") pod \"service-ca-operator-5b9c976747-6r26r\" (UID: \"a3388546-9539-4838-83b7-fd30f5875e2a\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-6r26r" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.323203 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8f7e099d-81da-48a0-bf4a-c152167e8f40-tmp\") pod \"marketplace-operator-547dbd544d-6zxjl\" (UID: \"8f7e099d-81da-48a0-bf4a-c152167e8f40\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-6zxjl" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.323248 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klmxq\" (UniqueName: \"kubernetes.io/projected/9f5b98e7-087d-4bbf-aca2-2fe25b4f9956-kube-api-access-klmxq\") pod \"olm-operator-5cdf44d969-p2z4m\" (UID: \"9f5b98e7-087d-4bbf-aca2-2fe25b4f9956\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-p2z4m" Jan 21 
09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.323726 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fcd945f6-07b1-46f0-9c38-69d04075b569-bound-sa-token\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.323763 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/be5020dc-d64c-44bf-b6fa-10f148f9f046-audit-policies\") pod \"apiserver-8596bd845d-nrprx\" (UID: \"be5020dc-d64c-44bf-b6fa-10f148f9f046\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nrprx" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.323798 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wctf\" (UniqueName: \"kubernetes.io/projected/7759c9b2-9727-4831-9610-fb40bdd99328-kube-api-access-6wctf\") pod \"packageserver-7d4fc7d867-g2wdf\" (UID: \"7759c9b2-9727-4831-9610-fb40bdd99328\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-g2wdf" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.323825 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e056cd26-aa6b-40b8-8c4b-cc8bef760ea6-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-nqfz9\" (UID: \"e056cd26-aa6b-40b8-8c4b-cc8bef760ea6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nqfz9" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.323866 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/a50abcac-1e9b-44cb-8ee6-9509fe77429a-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-zdngj\" (UID: \"a50abcac-1e9b-44cb-8ee6-9509fe77429a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zdngj" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.323935 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zhtn\" (UniqueName: \"kubernetes.io/projected/bc2585d6-3963-4f8e-98ad-e9a07ebcaf4e-kube-api-access-2zhtn\") pod \"machine-config-controller-f9cdd68f7-rrtzq\" (UID: \"bc2585d6-3963-4f8e-98ad-e9a07ebcaf4e\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-rrtzq" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.324049 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ztqpt\" (UniqueName: \"kubernetes.io/projected/fcd945f6-07b1-46f0-9c38-69d04075b569-kube-api-access-ztqpt\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.324085 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a50abcac-1e9b-44cb-8ee6-9509fe77429a-tmpfs\") pod \"catalog-operator-75ff9f647d-zdngj\" (UID: \"a50abcac-1e9b-44cb-8ee6-9509fe77429a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zdngj" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.324148 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/be5020dc-d64c-44bf-b6fa-10f148f9f046-etcd-serving-ca\") pod \"apiserver-8596bd845d-nrprx\" (UID: \"be5020dc-d64c-44bf-b6fa-10f148f9f046\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nrprx" Jan 21 09:19:45 
crc kubenswrapper[5113]: I0121 09:19:45.324200 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be5020dc-d64c-44bf-b6fa-10f148f9f046-trusted-ca-bundle\") pod \"apiserver-8596bd845d-nrprx\" (UID: \"be5020dc-d64c-44bf-b6fa-10f148f9f046\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nrprx" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.324290 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e7dfcff-8023-43d7-9cdc-c0609d97fd55-serving-cert\") pod \"kube-apiserver-operator-575994946d-jlkp8\" (UID: \"1e7dfcff-8023-43d7-9cdc-c0609d97fd55\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jlkp8" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.324321 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ca8d2ce1-3e0c-44f8-9327-4935a2691c4d-webhook-certs\") pod \"multus-admission-controller-69db94689b-8gwvh\" (UID: \"ca8d2ce1-3e0c-44f8-9327-4935a2691c4d\") " pod="openshift-multus/multus-admission-controller-69db94689b-8gwvh" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.324402 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fcd945f6-07b1-46f0-9c38-69d04075b569-registry-tls\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.324576 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/be5020dc-d64c-44bf-b6fa-10f148f9f046-etcd-serving-ca\") pod \"apiserver-8596bd845d-nrprx\" (UID: 
\"be5020dc-d64c-44bf-b6fa-10f148f9f046\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nrprx" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.324640 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9f5b98e7-087d-4bbf-aca2-2fe25b4f9956-srv-cert\") pod \"olm-operator-5cdf44d969-p2z4m\" (UID: \"9f5b98e7-087d-4bbf-aca2-2fe25b4f9956\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-p2z4m" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.324696 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/688053cc-c60a-4778-8b7b-f79c508916fa-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-pltp7\" (UID: \"688053cc-c60a-4778-8b7b-f79c508916fa\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pltp7" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.325376 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/6a7eeb36-f834-4af3-8f38-f15bda8f1adb-ready\") pod \"cni-sysctl-allowlist-ds-648hm\" (UID: \"6a7eeb36-f834-4af3-8f38-f15bda8f1adb\") " pod="openshift-multus/cni-sysctl-allowlist-ds-648hm" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.325406 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/70164c7a-a7bf-47b3-8cb1-cfe98ab4449d-certs\") pod \"machine-config-server-f9nd8\" (UID: \"70164c7a-a7bf-47b3-8cb1-cfe98ab4449d\") " pod="openshift-machine-config-operator/machine-config-server-f9nd8" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.325447 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e056cd26-aa6b-40b8-8c4b-cc8bef760ea6-config\") pod \"openshift-kube-scheduler-operator-54f497555d-nqfz9\" (UID: \"e056cd26-aa6b-40b8-8c4b-cc8bef760ea6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nqfz9" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.325478 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7759c9b2-9727-4831-9610-fb40bdd99328-tmpfs\") pod \"packageserver-7d4fc7d867-g2wdf\" (UID: \"7759c9b2-9727-4831-9610-fb40bdd99328\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-g2wdf" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.325496 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6a7eeb36-f834-4af3-8f38-f15bda8f1adb-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-648hm\" (UID: \"6a7eeb36-f834-4af3-8f38-f15bda8f1adb\") " pod="openshift-multus/cni-sysctl-allowlist-ds-648hm" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.326090 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fcd945f6-07b1-46f0-9c38-69d04075b569-registry-certificates\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.326122 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d92d168e-5ab8-45e4-bfac-a7d9ecce89fb-cert\") pod \"ingress-canary-7lgzg\" (UID: \"d92d168e-5ab8-45e4-bfac-a7d9ecce89fb\") " pod="openshift-ingress-canary/ingress-canary-7lgzg" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.326139 5113 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a50abcac-1e9b-44cb-8ee6-9509fe77429a-srv-cert\") pod \"catalog-operator-75ff9f647d-zdngj\" (UID: \"a50abcac-1e9b-44cb-8ee6-9509fe77429a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zdngj" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.326373 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fcd945f6-07b1-46f0-9c38-69d04075b569-ca-trust-extracted\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.326889 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be5020dc-d64c-44bf-b6fa-10f148f9f046-serving-cert\") pod \"apiserver-8596bd845d-nrprx\" (UID: \"be5020dc-d64c-44bf-b6fa-10f148f9f046\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nrprx" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.327170 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d762a2fd-4dde-4547-8eb4-7bffbf4dee94-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-6jtml\" (UID: \"d762a2fd-4dde-4547-8eb4-7bffbf4dee94\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6jtml" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.327186 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fcd945f6-07b1-46f0-9c38-69d04075b569-registry-certificates\") pod \"image-registry-66587d64c8-wt9pn\" (UID: 
\"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.327203 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8f7e099d-81da-48a0-bf4a-c152167e8f40-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-6zxjl\" (UID: \"8f7e099d-81da-48a0-bf4a-c152167e8f40\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-6zxjl" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.327794 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/917e721d-67a0-46ba-88cb-9541fba5ebf1-signing-key\") pod \"service-ca-74545575db-pj5xv\" (UID: \"917e721d-67a0-46ba-88cb-9541fba5ebf1\") " pod="openshift-service-ca/service-ca-74545575db-pj5xv" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.327832 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e7dfcff-8023-43d7-9cdc-c0609d97fd55-kube-api-access\") pod \"kube-apiserver-operator-575994946d-jlkp8\" (UID: \"1e7dfcff-8023-43d7-9cdc-c0609d97fd55\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jlkp8" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.328780 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bc2585d6-3963-4f8e-98ad-e9a07ebcaf4e-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-rrtzq\" (UID: \"bc2585d6-3963-4f8e-98ad-e9a07ebcaf4e\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-rrtzq" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.328811 5113 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/70164c7a-a7bf-47b3-8cb1-cfe98ab4449d-node-bootstrap-token\") pod \"machine-config-server-f9nd8\" (UID: \"70164c7a-a7bf-47b3-8cb1-cfe98ab4449d\") " pod="openshift-machine-config-operator/machine-config-server-f9nd8" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.328891 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/39eadf96-95ee-4c52-a0bd-2aee3c3a8d7c-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-mssdn\" (UID: \"39eadf96-95ee-4c52-a0bd-2aee3c3a8d7c\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mssdn" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.328921 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/917e721d-67a0-46ba-88cb-9541fba5ebf1-signing-cabundle\") pod \"service-ca-74545575db-pj5xv\" (UID: \"917e721d-67a0-46ba-88cb-9541fba5ebf1\") " pod="openshift-service-ca/service-ca-74545575db-pj5xv" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.329389 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fcd945f6-07b1-46f0-9c38-69d04075b569-trusted-ca\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.329624 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/be5020dc-d64c-44bf-b6fa-10f148f9f046-etcd-client\") pod \"apiserver-8596bd845d-nrprx\" (UID: \"be5020dc-d64c-44bf-b6fa-10f148f9f046\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nrprx" Jan 21 
09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.330445 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/688053cc-c60a-4778-8b7b-f79c508916fa-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-pltp7\" (UID: \"688053cc-c60a-4778-8b7b-f79c508916fa\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pltp7" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.331654 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fcd945f6-07b1-46f0-9c38-69d04075b569-registry-tls\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.333193 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be5020dc-d64c-44bf-b6fa-10f148f9f046-serving-cert\") pod \"apiserver-8596bd845d-nrprx\" (UID: \"be5020dc-d64c-44bf-b6fa-10f148f9f046\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nrprx" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.333455 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/be5020dc-d64c-44bf-b6fa-10f148f9f046-encryption-config\") pod \"apiserver-8596bd845d-nrprx\" (UID: \"be5020dc-d64c-44bf-b6fa-10f148f9f046\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nrprx" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.342408 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fcd945f6-07b1-46f0-9c38-69d04075b569-installation-pull-secrets\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " 
pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.379064 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbgtf\" (UniqueName: \"kubernetes.io/projected/be5020dc-d64c-44bf-b6fa-10f148f9f046-kube-api-access-sbgtf\") pod \"apiserver-8596bd845d-nrprx\" (UID: \"be5020dc-d64c-44bf-b6fa-10f148f9f046\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-nrprx" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.400401 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-282qj\" (UniqueName: \"kubernetes.io/projected/688053cc-c60a-4778-8b7b-f79c508916fa-kube-api-access-282qj\") pod \"package-server-manager-77f986bd66-pltp7\" (UID: \"688053cc-c60a-4778-8b7b-f79c508916fa\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pltp7" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.415279 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fcd945f6-07b1-46f0-9c38-69d04075b569-bound-sa-token\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.431687 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.431820 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7759c9b2-9727-4831-9610-fb40bdd99328-apiservice-cert\") pod 
\"packageserver-7d4fc7d867-g2wdf\" (UID: \"7759c9b2-9727-4831-9610-fb40bdd99328\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-g2wdf" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.431842 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dw64g\" (UniqueName: \"kubernetes.io/projected/39eadf96-95ee-4c52-a0bd-2aee3c3a8d7c-kube-api-access-dw64g\") pod \"machine-config-operator-67c9d58cbb-mssdn\" (UID: \"39eadf96-95ee-4c52-a0bd-2aee3c3a8d7c\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mssdn" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.431863 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/39eadf96-95ee-4c52-a0bd-2aee3c3a8d7c-images\") pod \"machine-config-operator-67c9d58cbb-mssdn\" (UID: \"39eadf96-95ee-4c52-a0bd-2aee3c3a8d7c\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mssdn" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.431878 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lkbg6\" (UniqueName: \"kubernetes.io/projected/8f7e099d-81da-48a0-bf4a-c152167e8f40-kube-api-access-lkbg6\") pod \"marketplace-operator-547dbd544d-6zxjl\" (UID: \"8f7e099d-81da-48a0-bf4a-c152167e8f40\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-6zxjl" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.431895 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bc2585d6-3963-4f8e-98ad-e9a07ebcaf4e-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-rrtzq\" (UID: \"bc2585d6-3963-4f8e-98ad-e9a07ebcaf4e\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-rrtzq" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.431915 5113 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6a7eeb36-f834-4af3-8f38-f15bda8f1adb-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-648hm\" (UID: \"6a7eeb36-f834-4af3-8f38-f15bda8f1adb\") " pod="openshift-multus/cni-sysctl-allowlist-ds-648hm" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.431932 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/9f5b98e7-087d-4bbf-aca2-2fe25b4f9956-tmpfs\") pod \"olm-operator-5cdf44d969-p2z4m\" (UID: \"9f5b98e7-087d-4bbf-aca2-2fe25b4f9956\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-p2z4m" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.431950 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d762a2fd-4dde-4547-8eb4-7bffbf4dee94-config\") pod \"kube-storage-version-migrator-operator-565b79b866-6jtml\" (UID: \"d762a2fd-4dde-4547-8eb4-7bffbf4dee94\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6jtml" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.431966 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mkwlf\" (UniqueName: \"kubernetes.io/projected/6a7eeb36-f834-4af3-8f38-f15bda8f1adb-kube-api-access-mkwlf\") pod \"cni-sysctl-allowlist-ds-648hm\" (UID: \"6a7eeb36-f834-4af3-8f38-f15bda8f1adb\") " pod="openshift-multus/cni-sysctl-allowlist-ds-648hm" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.431982 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/248b0806-1d40-4954-a74e-6282de18ff7b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-q9rxd\" (UID: 
\"248b0806-1d40-4954-a74e-6282de18ff7b\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-q9rxd" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432003 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/39eadf96-95ee-4c52-a0bd-2aee3c3a8d7c-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-mssdn\" (UID: \"39eadf96-95ee-4c52-a0bd-2aee3c3a8d7c\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mssdn" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432017 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dcdec0ee-3553-4c15-ad1f-eb6b29eec33a-secret-volume\") pod \"collect-profiles-29483115-gpmgl\" (UID: \"dcdec0ee-3553-4c15-ad1f-eb6b29eec33a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483115-gpmgl" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432033 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7759c9b2-9727-4831-9610-fb40bdd99328-webhook-cert\") pod \"packageserver-7d4fc7d867-g2wdf\" (UID: \"7759c9b2-9727-4831-9610-fb40bdd99328\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-g2wdf" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432051 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3388546-9539-4838-83b7-fd30f5875e2a-config\") pod \"service-ca-operator-5b9c976747-6r26r\" (UID: \"a3388546-9539-4838-83b7-fd30f5875e2a\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-6r26r" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432067 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2nxfr\" (UniqueName: 
\"kubernetes.io/projected/a3388546-9539-4838-83b7-fd30f5875e2a-kube-api-access-2nxfr\") pod \"service-ca-operator-5b9c976747-6r26r\" (UID: \"a3388546-9539-4838-83b7-fd30f5875e2a\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-6r26r" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432081 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8f7e099d-81da-48a0-bf4a-c152167e8f40-tmp\") pod \"marketplace-operator-547dbd544d-6zxjl\" (UID: \"8f7e099d-81da-48a0-bf4a-c152167e8f40\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-6zxjl" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432097 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-klmxq\" (UniqueName: \"kubernetes.io/projected/9f5b98e7-087d-4bbf-aca2-2fe25b4f9956-kube-api-access-klmxq\") pod \"olm-operator-5cdf44d969-p2z4m\" (UID: \"9f5b98e7-087d-4bbf-aca2-2fe25b4f9956\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-p2z4m" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432154 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6wctf\" (UniqueName: \"kubernetes.io/projected/7759c9b2-9727-4831-9610-fb40bdd99328-kube-api-access-6wctf\") pod \"packageserver-7d4fc7d867-g2wdf\" (UID: \"7759c9b2-9727-4831-9610-fb40bdd99328\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-g2wdf" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432171 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e056cd26-aa6b-40b8-8c4b-cc8bef760ea6-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-nqfz9\" (UID: \"e056cd26-aa6b-40b8-8c4b-cc8bef760ea6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nqfz9" Jan 21 09:19:45 crc 
kubenswrapper[5113]: I0121 09:19:45.432190 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a50abcac-1e9b-44cb-8ee6-9509fe77429a-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-zdngj\" (UID: \"a50abcac-1e9b-44cb-8ee6-9509fe77429a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zdngj" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432206 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2zhtn\" (UniqueName: \"kubernetes.io/projected/bc2585d6-3963-4f8e-98ad-e9a07ebcaf4e-kube-api-access-2zhtn\") pod \"machine-config-controller-f9cdd68f7-rrtzq\" (UID: \"bc2585d6-3963-4f8e-98ad-e9a07ebcaf4e\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-rrtzq" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432221 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a50abcac-1e9b-44cb-8ee6-9509fe77429a-tmpfs\") pod \"catalog-operator-75ff9f647d-zdngj\" (UID: \"a50abcac-1e9b-44cb-8ee6-9509fe77429a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zdngj" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432240 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e7dfcff-8023-43d7-9cdc-c0609d97fd55-serving-cert\") pod \"kube-apiserver-operator-575994946d-jlkp8\" (UID: \"1e7dfcff-8023-43d7-9cdc-c0609d97fd55\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jlkp8" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432256 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ca8d2ce1-3e0c-44f8-9327-4935a2691c4d-webhook-certs\") pod 
\"multus-admission-controller-69db94689b-8gwvh\" (UID: \"ca8d2ce1-3e0c-44f8-9327-4935a2691c4d\") " pod="openshift-multus/multus-admission-controller-69db94689b-8gwvh" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432274 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9f5b98e7-087d-4bbf-aca2-2fe25b4f9956-srv-cert\") pod \"olm-operator-5cdf44d969-p2z4m\" (UID: \"9f5b98e7-087d-4bbf-aca2-2fe25b4f9956\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-p2z4m" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432308 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/6a7eeb36-f834-4af3-8f38-f15bda8f1adb-ready\") pod \"cni-sysctl-allowlist-ds-648hm\" (UID: \"6a7eeb36-f834-4af3-8f38-f15bda8f1adb\") " pod="openshift-multus/cni-sysctl-allowlist-ds-648hm" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432323 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/70164c7a-a7bf-47b3-8cb1-cfe98ab4449d-certs\") pod \"machine-config-server-f9nd8\" (UID: \"70164c7a-a7bf-47b3-8cb1-cfe98ab4449d\") " pod="openshift-machine-config-operator/machine-config-server-f9nd8" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432339 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e056cd26-aa6b-40b8-8c4b-cc8bef760ea6-config\") pod \"openshift-kube-scheduler-operator-54f497555d-nqfz9\" (UID: \"e056cd26-aa6b-40b8-8c4b-cc8bef760ea6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nqfz9" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432355 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/7759c9b2-9727-4831-9610-fb40bdd99328-tmpfs\") pod \"packageserver-7d4fc7d867-g2wdf\" (UID: \"7759c9b2-9727-4831-9610-fb40bdd99328\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-g2wdf" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432372 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6a7eeb36-f834-4af3-8f38-f15bda8f1adb-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-648hm\" (UID: \"6a7eeb36-f834-4af3-8f38-f15bda8f1adb\") " pod="openshift-multus/cni-sysctl-allowlist-ds-648hm" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432395 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d92d168e-5ab8-45e4-bfac-a7d9ecce89fb-cert\") pod \"ingress-canary-7lgzg\" (UID: \"d92d168e-5ab8-45e4-bfac-a7d9ecce89fb\") " pod="openshift-ingress-canary/ingress-canary-7lgzg" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432410 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a50abcac-1e9b-44cb-8ee6-9509fe77429a-srv-cert\") pod \"catalog-operator-75ff9f647d-zdngj\" (UID: \"a50abcac-1e9b-44cb-8ee6-9509fe77429a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zdngj" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432430 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d762a2fd-4dde-4547-8eb4-7bffbf4dee94-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-6jtml\" (UID: \"d762a2fd-4dde-4547-8eb4-7bffbf4dee94\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6jtml" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432445 5113 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8f7e099d-81da-48a0-bf4a-c152167e8f40-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-6zxjl\" (UID: \"8f7e099d-81da-48a0-bf4a-c152167e8f40\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-6zxjl" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432462 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/917e721d-67a0-46ba-88cb-9541fba5ebf1-signing-key\") pod \"service-ca-74545575db-pj5xv\" (UID: \"917e721d-67a0-46ba-88cb-9541fba5ebf1\") " pod="openshift-service-ca/service-ca-74545575db-pj5xv" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432476 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e7dfcff-8023-43d7-9cdc-c0609d97fd55-kube-api-access\") pod \"kube-apiserver-operator-575994946d-jlkp8\" (UID: \"1e7dfcff-8023-43d7-9cdc-c0609d97fd55\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jlkp8" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432498 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bc2585d6-3963-4f8e-98ad-e9a07ebcaf4e-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-rrtzq\" (UID: \"bc2585d6-3963-4f8e-98ad-e9a07ebcaf4e\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-rrtzq" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432512 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/70164c7a-a7bf-47b3-8cb1-cfe98ab4449d-node-bootstrap-token\") pod \"machine-config-server-f9nd8\" (UID: \"70164c7a-a7bf-47b3-8cb1-cfe98ab4449d\") " 
pod="openshift-machine-config-operator/machine-config-server-f9nd8" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432536 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/39eadf96-95ee-4c52-a0bd-2aee3c3a8d7c-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-mssdn\" (UID: \"39eadf96-95ee-4c52-a0bd-2aee3c3a8d7c\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mssdn" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432551 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/917e721d-67a0-46ba-88cb-9541fba5ebf1-signing-cabundle\") pod \"service-ca-74545575db-pj5xv\" (UID: \"917e721d-67a0-46ba-88cb-9541fba5ebf1\") " pod="openshift-service-ca/service-ca-74545575db-pj5xv" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432571 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-msp8l\" (UniqueName: \"kubernetes.io/projected/78a63e50-2573-4ab1-bdc4-ff5a86a33f47-kube-api-access-msp8l\") pod \"csi-hostpathplugin-xdz8l\" (UID: \"78a63e50-2573-4ab1-bdc4-ff5a86a33f47\") " pod="hostpath-provisioner/csi-hostpathplugin-xdz8l" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432588 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2kcw7\" (UniqueName: \"kubernetes.io/projected/d92d168e-5ab8-45e4-bfac-a7d9ecce89fb-kube-api-access-2kcw7\") pod \"ingress-canary-7lgzg\" (UID: \"d92d168e-5ab8-45e4-bfac-a7d9ecce89fb\") " pod="openshift-ingress-canary/ingress-canary-7lgzg" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432618 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/1e7dfcff-8023-43d7-9cdc-c0609d97fd55-tmp-dir\") pod \"kube-apiserver-operator-575994946d-jlkp8\" (UID: 
\"1e7dfcff-8023-43d7-9cdc-c0609d97fd55\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jlkp8" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432636 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/78a63e50-2573-4ab1-bdc4-ff5a86a33f47-registration-dir\") pod \"csi-hostpathplugin-xdz8l\" (UID: \"78a63e50-2573-4ab1-bdc4-ff5a86a33f47\") " pod="hostpath-provisioner/csi-hostpathplugin-xdz8l" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432657 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5380c95f-900e-457f-b1de-ad687328f6ea-metrics-tls\") pod \"dns-default-q9t2j\" (UID: \"5380c95f-900e-457f-b1de-ad687328f6ea\") " pod="openshift-dns/dns-default-q9t2j" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432675 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qks8b\" (UniqueName: \"kubernetes.io/projected/70164c7a-a7bf-47b3-8cb1-cfe98ab4449d-kube-api-access-qks8b\") pod \"machine-config-server-f9nd8\" (UID: \"70164c7a-a7bf-47b3-8cb1-cfe98ab4449d\") " pod="openshift-machine-config-operator/machine-config-server-f9nd8" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432694 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b2xlc\" (UniqueName: \"kubernetes.io/projected/917e721d-67a0-46ba-88cb-9541fba5ebf1-kube-api-access-b2xlc\") pod \"service-ca-74545575db-pj5xv\" (UID: \"917e721d-67a0-46ba-88cb-9541fba5ebf1\") " pod="openshift-service-ca/service-ca-74545575db-pj5xv" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432721 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5380c95f-900e-457f-b1de-ad687328f6ea-config-volume\") pod 
\"dns-default-q9t2j\" (UID: \"5380c95f-900e-457f-b1de-ad687328f6ea\") " pod="openshift-dns/dns-default-q9t2j" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432756 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dcdec0ee-3553-4c15-ad1f-eb6b29eec33a-config-volume\") pod \"collect-profiles-29483115-gpmgl\" (UID: \"dcdec0ee-3553-4c15-ad1f-eb6b29eec33a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483115-gpmgl" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432779 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gz7k6\" (UniqueName: \"kubernetes.io/projected/dcdec0ee-3553-4c15-ad1f-eb6b29eec33a-kube-api-access-gz7k6\") pod \"collect-profiles-29483115-gpmgl\" (UID: \"dcdec0ee-3553-4c15-ad1f-eb6b29eec33a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483115-gpmgl" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432796 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pfrp2\" (UniqueName: \"kubernetes.io/projected/248b0806-1d40-4954-a74e-6282de18ff7b-kube-api-access-pfrp2\") pod \"control-plane-machine-set-operator-75ffdb6fcd-q9rxd\" (UID: \"248b0806-1d40-4954-a74e-6282de18ff7b\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-q9rxd" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432812 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e056cd26-aa6b-40b8-8c4b-cc8bef760ea6-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-nqfz9\" (UID: \"e056cd26-aa6b-40b8-8c4b-cc8bef760ea6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nqfz9" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432826 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e056cd26-aa6b-40b8-8c4b-cc8bef760ea6-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-nqfz9\" (UID: \"e056cd26-aa6b-40b8-8c4b-cc8bef760ea6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nqfz9" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432841 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/78a63e50-2573-4ab1-bdc4-ff5a86a33f47-mountpoint-dir\") pod \"csi-hostpathplugin-xdz8l\" (UID: \"78a63e50-2573-4ab1-bdc4-ff5a86a33f47\") " pod="hostpath-provisioner/csi-hostpathplugin-xdz8l" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432856 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/78a63e50-2573-4ab1-bdc4-ff5a86a33f47-plugins-dir\") pod \"csi-hostpathplugin-xdz8l\" (UID: \"78a63e50-2573-4ab1-bdc4-ff5a86a33f47\") " pod="hostpath-provisioner/csi-hostpathplugin-xdz8l" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432877 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9f5b98e7-087d-4bbf-aca2-2fe25b4f9956-profile-collector-cert\") pod \"olm-operator-5cdf44d969-p2z4m\" (UID: \"9f5b98e7-087d-4bbf-aca2-2fe25b4f9956\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-p2z4m" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432892 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e7dfcff-8023-43d7-9cdc-c0609d97fd55-config\") pod \"kube-apiserver-operator-575994946d-jlkp8\" (UID: \"1e7dfcff-8023-43d7-9cdc-c0609d97fd55\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jlkp8" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 
09:19:45.432908 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/78a63e50-2573-4ab1-bdc4-ff5a86a33f47-csi-data-dir\") pod \"csi-hostpathplugin-xdz8l\" (UID: \"78a63e50-2573-4ab1-bdc4-ff5a86a33f47\") " pod="hostpath-provisioner/csi-hostpathplugin-xdz8l" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432925 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ctsjd\" (UniqueName: \"kubernetes.io/projected/ca8d2ce1-3e0c-44f8-9327-4935a2691c4d-kube-api-access-ctsjd\") pod \"multus-admission-controller-69db94689b-8gwvh\" (UID: \"ca8d2ce1-3e0c-44f8-9327-4935a2691c4d\") " pod="openshift-multus/multus-admission-controller-69db94689b-8gwvh" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432940 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/78a63e50-2573-4ab1-bdc4-ff5a86a33f47-socket-dir\") pod \"csi-hostpathplugin-xdz8l\" (UID: \"78a63e50-2573-4ab1-bdc4-ff5a86a33f47\") " pod="hostpath-provisioner/csi-hostpathplugin-xdz8l" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432963 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cwxxm\" (UniqueName: \"kubernetes.io/projected/d762a2fd-4dde-4547-8eb4-7bffbf4dee94-kube-api-access-cwxxm\") pod \"kube-storage-version-migrator-operator-565b79b866-6jtml\" (UID: \"d762a2fd-4dde-4547-8eb4-7bffbf4dee94\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6jtml" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.432978 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3388546-9539-4838-83b7-fd30f5875e2a-serving-cert\") pod \"service-ca-operator-5b9c976747-6r26r\" (UID: 
\"a3388546-9539-4838-83b7-fd30f5875e2a\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-6r26r" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.433001 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f7e099d-81da-48a0-bf4a-c152167e8f40-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-6zxjl\" (UID: \"8f7e099d-81da-48a0-bf4a-c152167e8f40\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-6zxjl" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.433020 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xws4g\" (UniqueName: \"kubernetes.io/projected/a50abcac-1e9b-44cb-8ee6-9509fe77429a-kube-api-access-xws4g\") pod \"catalog-operator-75ff9f647d-zdngj\" (UID: \"a50abcac-1e9b-44cb-8ee6-9509fe77429a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zdngj" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.433035 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5380c95f-900e-457f-b1de-ad687328f6ea-tmp-dir\") pod \"dns-default-q9t2j\" (UID: \"5380c95f-900e-457f-b1de-ad687328f6ea\") " pod="openshift-dns/dns-default-q9t2j" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.433051 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hdbrs\" (UniqueName: \"kubernetes.io/projected/5380c95f-900e-457f-b1de-ad687328f6ea-kube-api-access-hdbrs\") pod \"dns-default-q9t2j\" (UID: \"5380c95f-900e-457f-b1de-ad687328f6ea\") " pod="openshift-dns/dns-default-q9t2j" Jan 21 09:19:45 crc kubenswrapper[5113]: E0121 09:19:45.433254 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af 
nodeName:}" failed. No retries permitted until 2026-01-21 09:19:45.933240286 +0000 UTC m=+115.434067335 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.436193 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dcdec0ee-3553-4c15-ad1f-eb6b29eec33a-config-volume\") pod \"collect-profiles-29483115-gpmgl\" (UID: \"dcdec0ee-3553-4c15-ad1f-eb6b29eec33a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483115-gpmgl" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.437260 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/39eadf96-95ee-4c52-a0bd-2aee3c3a8d7c-images\") pod \"machine-config-operator-67c9d58cbb-mssdn\" (UID: \"39eadf96-95ee-4c52-a0bd-2aee3c3a8d7c\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mssdn" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.437919 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bc2585d6-3963-4f8e-98ad-e9a07ebcaf4e-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-rrtzq\" (UID: \"bc2585d6-3963-4f8e-98ad-e9a07ebcaf4e\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-rrtzq" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.438380 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" 
(UniqueName: \"kubernetes.io/empty-dir/9f5b98e7-087d-4bbf-aca2-2fe25b4f9956-tmpfs\") pod \"olm-operator-5cdf44d969-p2z4m\" (UID: \"9f5b98e7-087d-4bbf-aca2-2fe25b4f9956\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-p2z4m" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.438608 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/39eadf96-95ee-4c52-a0bd-2aee3c3a8d7c-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-mssdn\" (UID: \"39eadf96-95ee-4c52-a0bd-2aee3c3a8d7c\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mssdn" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.439035 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztqpt\" (UniqueName: \"kubernetes.io/projected/fcd945f6-07b1-46f0-9c38-69d04075b569-kube-api-access-ztqpt\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.439119 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d762a2fd-4dde-4547-8eb4-7bffbf4dee94-config\") pod \"kube-storage-version-migrator-operator-565b79b866-6jtml\" (UID: \"d762a2fd-4dde-4547-8eb4-7bffbf4dee94\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6jtml" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.439184 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6a7eeb36-f834-4af3-8f38-f15bda8f1adb-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-648hm\" (UID: \"6a7eeb36-f834-4af3-8f38-f15bda8f1adb\") " pod="openshift-multus/cni-sysctl-allowlist-ds-648hm" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 
09:19:45.439245 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/78a63e50-2573-4ab1-bdc4-ff5a86a33f47-csi-data-dir\") pod \"csi-hostpathplugin-xdz8l\" (UID: \"78a63e50-2573-4ab1-bdc4-ff5a86a33f47\") " pod="hostpath-provisioner/csi-hostpathplugin-xdz8l" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.439466 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e7dfcff-8023-43d7-9cdc-c0609d97fd55-config\") pod \"kube-apiserver-operator-575994946d-jlkp8\" (UID: \"1e7dfcff-8023-43d7-9cdc-c0609d97fd55\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jlkp8" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.439498 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/78a63e50-2573-4ab1-bdc4-ff5a86a33f47-socket-dir\") pod \"csi-hostpathplugin-xdz8l\" (UID: \"78a63e50-2573-4ab1-bdc4-ff5a86a33f47\") " pod="hostpath-provisioner/csi-hostpathplugin-xdz8l" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.439529 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/1e7dfcff-8023-43d7-9cdc-c0609d97fd55-tmp-dir\") pod \"kube-apiserver-operator-575994946d-jlkp8\" (UID: \"1e7dfcff-8023-43d7-9cdc-c0609d97fd55\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jlkp8" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.440217 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/78a63e50-2573-4ab1-bdc4-ff5a86a33f47-mountpoint-dir\") pod \"csi-hostpathplugin-xdz8l\" (UID: \"78a63e50-2573-4ab1-bdc4-ff5a86a33f47\") " pod="hostpath-provisioner/csi-hostpathplugin-xdz8l" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.440690 5113 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5380c95f-900e-457f-b1de-ad687328f6ea-config-volume\") pod \"dns-default-q9t2j\" (UID: \"5380c95f-900e-457f-b1de-ad687328f6ea\") " pod="openshift-dns/dns-default-q9t2j" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.440763 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/78a63e50-2573-4ab1-bdc4-ff5a86a33f47-registration-dir\") pod \"csi-hostpathplugin-xdz8l\" (UID: \"78a63e50-2573-4ab1-bdc4-ff5a86a33f47\") " pod="hostpath-provisioner/csi-hostpathplugin-xdz8l" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.440834 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/78a63e50-2573-4ab1-bdc4-ff5a86a33f47-plugins-dir\") pod \"csi-hostpathplugin-xdz8l\" (UID: \"78a63e50-2573-4ab1-bdc4-ff5a86a33f47\") " pod="hostpath-provisioner/csi-hostpathplugin-xdz8l" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.440899 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e056cd26-aa6b-40b8-8c4b-cc8bef760ea6-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-nqfz9\" (UID: \"e056cd26-aa6b-40b8-8c4b-cc8bef760ea6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nqfz9" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.441962 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f7e099d-81da-48a0-bf4a-c152167e8f40-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-6zxjl\" (UID: \"8f7e099d-81da-48a0-bf4a-c152167e8f40\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-6zxjl" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.445426 5113 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/917e721d-67a0-46ba-88cb-9541fba5ebf1-signing-cabundle\") pod \"service-ca-74545575db-pj5xv\" (UID: \"917e721d-67a0-46ba-88cb-9541fba5ebf1\") " pod="openshift-service-ca/service-ca-74545575db-pj5xv" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.445811 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5380c95f-900e-457f-b1de-ad687328f6ea-tmp-dir\") pod \"dns-default-q9t2j\" (UID: \"5380c95f-900e-457f-b1de-ad687328f6ea\") " pod="openshift-dns/dns-default-q9t2j" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.445996 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3388546-9539-4838-83b7-fd30f5875e2a-config\") pod \"service-ca-operator-5b9c976747-6r26r\" (UID: \"a3388546-9539-4838-83b7-fd30f5875e2a\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-6r26r" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.446323 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7759c9b2-9727-4831-9610-fb40bdd99328-tmpfs\") pod \"packageserver-7d4fc7d867-g2wdf\" (UID: \"7759c9b2-9727-4831-9610-fb40bdd99328\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-g2wdf" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.446572 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/6a7eeb36-f834-4af3-8f38-f15bda8f1adb-ready\") pod \"cni-sysctl-allowlist-ds-648hm\" (UID: \"6a7eeb36-f834-4af3-8f38-f15bda8f1adb\") " pod="openshift-multus/cni-sysctl-allowlist-ds-648hm" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.447412 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/8f7e099d-81da-48a0-bf4a-c152167e8f40-tmp\") pod \"marketplace-operator-547dbd544d-6zxjl\" (UID: \"8f7e099d-81da-48a0-bf4a-c152167e8f40\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-6zxjl" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.446932 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3388546-9539-4838-83b7-fd30f5875e2a-serving-cert\") pod \"service-ca-operator-5b9c976747-6r26r\" (UID: \"a3388546-9539-4838-83b7-fd30f5875e2a\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-6r26r" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.455161 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e056cd26-aa6b-40b8-8c4b-cc8bef760ea6-config\") pod \"openshift-kube-scheduler-operator-54f497555d-nqfz9\" (UID: \"e056cd26-aa6b-40b8-8c4b-cc8bef760ea6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nqfz9" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.457781 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a50abcac-1e9b-44cb-8ee6-9509fe77429a-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-zdngj\" (UID: \"a50abcac-1e9b-44cb-8ee6-9509fe77429a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zdngj" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.457992 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-nrprx" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.458781 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a50abcac-1e9b-44cb-8ee6-9509fe77429a-tmpfs\") pod \"catalog-operator-75ff9f647d-zdngj\" (UID: \"a50abcac-1e9b-44cb-8ee6-9509fe77429a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zdngj" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.459278 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6a7eeb36-f834-4af3-8f38-f15bda8f1adb-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-648hm\" (UID: \"6a7eeb36-f834-4af3-8f38-f15bda8f1adb\") " pod="openshift-multus/cni-sysctl-allowlist-ds-648hm" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.469360 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/39eadf96-95ee-4c52-a0bd-2aee3c3a8d7c-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-mssdn\" (UID: \"39eadf96-95ee-4c52-a0bd-2aee3c3a8d7c\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mssdn" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.470673 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/70164c7a-a7bf-47b3-8cb1-cfe98ab4449d-certs\") pod \"machine-config-server-f9nd8\" (UID: \"70164c7a-a7bf-47b3-8cb1-cfe98ab4449d\") " pod="openshift-machine-config-operator/machine-config-server-f9nd8" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.470830 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9f5b98e7-087d-4bbf-aca2-2fe25b4f9956-profile-collector-cert\") pod \"olm-operator-5cdf44d969-p2z4m\" (UID: 
\"9f5b98e7-087d-4bbf-aca2-2fe25b4f9956\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-p2z4m" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.471265 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a50abcac-1e9b-44cb-8ee6-9509fe77429a-srv-cert\") pod \"catalog-operator-75ff9f647d-zdngj\" (UID: \"a50abcac-1e9b-44cb-8ee6-9509fe77429a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zdngj" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.471557 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d762a2fd-4dde-4547-8eb4-7bffbf4dee94-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-6jtml\" (UID: \"d762a2fd-4dde-4547-8eb4-7bffbf4dee94\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6jtml" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.473524 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ca8d2ce1-3e0c-44f8-9327-4935a2691c4d-webhook-certs\") pod \"multus-admission-controller-69db94689b-8gwvh\" (UID: \"ca8d2ce1-3e0c-44f8-9327-4935a2691c4d\") " pod="openshift-multus/multus-admission-controller-69db94689b-8gwvh" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.474950 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7759c9b2-9727-4831-9610-fb40bdd99328-apiservice-cert\") pod \"packageserver-7d4fc7d867-g2wdf\" (UID: \"7759c9b2-9727-4831-9610-fb40bdd99328\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-g2wdf" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.475241 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/5380c95f-900e-457f-b1de-ad687328f6ea-metrics-tls\") pod \"dns-default-q9t2j\" (UID: \"5380c95f-900e-457f-b1de-ad687328f6ea\") " pod="openshift-dns/dns-default-q9t2j" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.475307 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bc2585d6-3963-4f8e-98ad-e9a07ebcaf4e-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-rrtzq\" (UID: \"bc2585d6-3963-4f8e-98ad-e9a07ebcaf4e\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-rrtzq" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.477553 5113 ???:1] "http: TLS handshake error from 192.168.126.11:36956: no serving certificate available for the kubelet" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.478534 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-f74j6"] Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.485196 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8f7e099d-81da-48a0-bf4a-c152167e8f40-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-6zxjl\" (UID: \"8f7e099d-81da-48a0-bf4a-c152167e8f40\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-6zxjl" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.487051 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/70164c7a-a7bf-47b3-8cb1-cfe98ab4449d-node-bootstrap-token\") pod \"machine-config-server-f9nd8\" (UID: \"70164c7a-a7bf-47b3-8cb1-cfe98ab4449d\") " pod="openshift-machine-config-operator/machine-config-server-f9nd8" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.493344 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/248b0806-1d40-4954-a74e-6282de18ff7b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-q9rxd\" (UID: \"248b0806-1d40-4954-a74e-6282de18ff7b\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-q9rxd" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.493846 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/917e721d-67a0-46ba-88cb-9541fba5ebf1-signing-key\") pod \"service-ca-74545575db-pj5xv\" (UID: \"917e721d-67a0-46ba-88cb-9541fba5ebf1\") " pod="openshift-service-ca/service-ca-74545575db-pj5xv" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.499648 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-klmxq\" (UniqueName: \"kubernetes.io/projected/9f5b98e7-087d-4bbf-aca2-2fe25b4f9956-kube-api-access-klmxq\") pod \"olm-operator-5cdf44d969-p2z4m\" (UID: \"9f5b98e7-087d-4bbf-aca2-2fe25b4f9956\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-p2z4m" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.499958 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d92d168e-5ab8-45e4-bfac-a7d9ecce89fb-cert\") pod \"ingress-canary-7lgzg\" (UID: \"d92d168e-5ab8-45e4-bfac-a7d9ecce89fb\") " pod="openshift-ingress-canary/ingress-canary-7lgzg" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.501863 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e056cd26-aa6b-40b8-8c4b-cc8bef760ea6-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-nqfz9\" (UID: \"e056cd26-aa6b-40b8-8c4b-cc8bef760ea6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nqfz9" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.502463 5113 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7759c9b2-9727-4831-9610-fb40bdd99328-webhook-cert\") pod \"packageserver-7d4fc7d867-g2wdf\" (UID: \"7759c9b2-9727-4831-9610-fb40bdd99328\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-g2wdf" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.505873 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e7dfcff-8023-43d7-9cdc-c0609d97fd55-serving-cert\") pod \"kube-apiserver-operator-575994946d-jlkp8\" (UID: \"1e7dfcff-8023-43d7-9cdc-c0609d97fd55\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jlkp8" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.506195 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dcdec0ee-3553-4c15-ad1f-eb6b29eec33a-secret-volume\") pod \"collect-profiles-29483115-gpmgl\" (UID: \"dcdec0ee-3553-4c15-ad1f-eb6b29eec33a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483115-gpmgl" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.506296 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw64g\" (UniqueName: \"kubernetes.io/projected/39eadf96-95ee-4c52-a0bd-2aee3c3a8d7c-kube-api-access-dw64g\") pod \"machine-config-operator-67c9d58cbb-mssdn\" (UID: \"39eadf96-95ee-4c52-a0bd-2aee3c3a8d7c\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mssdn" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.506335 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9f5b98e7-087d-4bbf-aca2-2fe25b4f9956-srv-cert\") pod \"olm-operator-5cdf44d969-p2z4m\" (UID: \"9f5b98e7-087d-4bbf-aca2-2fe25b4f9956\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-p2z4m" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.509720 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdbrs\" (UniqueName: \"kubernetes.io/projected/5380c95f-900e-457f-b1de-ad687328f6ea-kube-api-access-hdbrs\") pod \"dns-default-q9t2j\" (UID: \"5380c95f-900e-457f-b1de-ad687328f6ea\") " pod="openshift-dns/dns-default-q9t2j" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.535312 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:45 crc kubenswrapper[5113]: E0121 09:19:45.535586 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:46.035571376 +0000 UTC m=+115.536398425 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.537834 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pltp7" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.540477 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-8v2mh"] Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.545798 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkbg6\" (UniqueName: \"kubernetes.io/projected/8f7e099d-81da-48a0-bf4a-c152167e8f40-kube-api-access-lkbg6\") pod \"marketplace-operator-547dbd544d-6zxjl\" (UID: \"8f7e099d-81da-48a0-bf4a-c152167e8f40\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-6zxjl" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.551567 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-msp8l\" (UniqueName: \"kubernetes.io/projected/78a63e50-2573-4ab1-bdc4-ff5a86a33f47-kube-api-access-msp8l\") pod \"csi-hostpathplugin-xdz8l\" (UID: \"78a63e50-2573-4ab1-bdc4-ff5a86a33f47\") " pod="hostpath-provisioner/csi-hostpathplugin-xdz8l" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.577870 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwxxm\" (UniqueName: \"kubernetes.io/projected/d762a2fd-4dde-4547-8eb4-7bffbf4dee94-kube-api-access-cwxxm\") pod \"kube-storage-version-migrator-operator-565b79b866-6jtml\" (UID: \"d762a2fd-4dde-4547-8eb4-7bffbf4dee94\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6jtml" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.588673 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-2jmh4"] Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.594750 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkwlf\" (UniqueName: 
\"kubernetes.io/projected/6a7eeb36-f834-4af3-8f38-f15bda8f1adb-kube-api-access-mkwlf\") pod \"cni-sysctl-allowlist-ds-648hm\" (UID: \"6a7eeb36-f834-4af3-8f38-f15bda8f1adb\") " pod="openshift-multus/cni-sysctl-allowlist-ds-648hm" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.595393 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6jtml" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.606894 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-p2z4m" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.612108 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctsjd\" (UniqueName: \"kubernetes.io/projected/ca8d2ce1-3e0c-44f8-9327-4935a2691c4d-kube-api-access-ctsjd\") pod \"multus-admission-controller-69db94689b-8gwvh\" (UID: \"ca8d2ce1-3e0c-44f8-9327-4935a2691c4d\") " pod="openshift-multus/multus-admission-controller-69db94689b-8gwvh" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.617448 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-6zxjl" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.622228 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-f8w74"] Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.625445 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-qgt8d"] Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.633807 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kcw7\" (UniqueName: \"kubernetes.io/projected/d92d168e-5ab8-45e4-bfac-a7d9ecce89fb-kube-api-access-2kcw7\") pod \"ingress-canary-7lgzg\" (UID: \"d92d168e-5ab8-45e4-bfac-a7d9ecce89fb\") " pod="openshift-ingress-canary/ingress-canary-7lgzg" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.637803 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:19:45 crc kubenswrapper[5113]: E0121 09:19:45.638304 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:46.138268996 +0000 UTC m=+115.639096045 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.656447 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2xlc\" (UniqueName: \"kubernetes.io/projected/917e721d-67a0-46ba-88cb-9541fba5ebf1-kube-api-access-b2xlc\") pod \"service-ca-74545575db-pj5xv\" (UID: \"917e721d-67a0-46ba-88cb-9541fba5ebf1\") " pod="openshift-service-ca/service-ca-74545575db-pj5xv" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.663358 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-b46m8"] Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.663431 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-vxj85"] Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.670213 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-pj5xv" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.687830 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mssdn" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.695708 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-q8ftx"] Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.715954 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz7k6\" (UniqueName: \"kubernetes.io/projected/dcdec0ee-3553-4c15-ad1f-eb6b29eec33a-kube-api-access-gz7k6\") pod \"collect-profiles-29483115-gpmgl\" (UID: \"dcdec0ee-3553-4c15-ad1f-eb6b29eec33a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483115-gpmgl" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.716334 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-6cwdn"] Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.716394 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-4nhkn"] Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.716412 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qks8b\" (UniqueName: \"kubernetes.io/projected/70164c7a-a7bf-47b3-8cb1-cfe98ab4449d-kube-api-access-qks8b\") pod \"machine-config-server-f9nd8\" (UID: \"70164c7a-a7bf-47b3-8cb1-cfe98ab4449d\") " pod="openshift-machine-config-operator/machine-config-server-f9nd8" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.717081 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xdz8l" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.718382 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-f9nd8" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.720895 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-78rvb" event={"ID":"818c11fe-b682-4b2c-9f47-dee838219e31","Type":"ContainerStarted","Data":"abf0312e12aeacdac98c033211942d9d98c1ffc75fca18f4897f779087270db2"} Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.720944 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-78rvb" event={"ID":"818c11fe-b682-4b2c-9f47-dee838219e31","Type":"ContainerStarted","Data":"3d493b7287e01a808a14e13468dc5a4fd0d38a4b4c2e7500e3f85295dfe2f456"} Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.722993 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-ggq6c" event={"ID":"af605d44-e53a-4c60-8372-384c58f82b2b","Type":"ContainerStarted","Data":"48e366b8abc3d4c14eb94774927effa82def7427e306aac92dca19e105fdfce0"} Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.723494 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfrp2\" (UniqueName: \"kubernetes.io/projected/248b0806-1d40-4954-a74e-6282de18ff7b-kube-api-access-pfrp2\") pod \"control-plane-machine-set-operator-75ffdb6fcd-q9rxd\" (UID: \"248b0806-1d40-4954-a74e-6282de18ff7b\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-q9rxd" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.726804 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-7lgzg" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.735205 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-q9t2j" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.736060 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-fdg8l"] Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.737666 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qbp2v" event={"ID":"cc029c59-845c-4021-be17-fe92d61a361f","Type":"ContainerStarted","Data":"a8ffc25a6cbf437b9132b17ec2ab8f114d147b5ac32a965c8ff86e5ccbfbdc1b"} Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.737694 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qbp2v" event={"ID":"cc029c59-845c-4021-be17-fe92d61a361f","Type":"ContainerStarted","Data":"9435cffa3ade78570b968d16a25d16e9c3455dab6ae79e82b6b44b9f35eb40fe"} Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.739877 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:45 crc kubenswrapper[5113]: E0121 09:19:45.740322 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:46.240309038 +0000 UTC m=+115.741136087 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.747834 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e7dfcff-8023-43d7-9cdc-c0609d97fd55-kube-api-access\") pod \"kube-apiserver-operator-575994946d-jlkp8\" (UID: \"1e7dfcff-8023-43d7-9cdc-c0609d97fd55\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jlkp8" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.748071 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-648hm" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.749927 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-8v2mh" event={"ID":"493bf246-7422-45bb-a74f-34f4314445de","Type":"ContainerStarted","Data":"68f560315bbfc8ded93cfc4191f0346aa242290647c9bba7c46814fcfd517dad"} Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.765856 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xws4g\" (UniqueName: \"kubernetes.io/projected/a50abcac-1e9b-44cb-8ee6-9509fe77429a-kube-api-access-xws4g\") pod \"catalog-operator-75ff9f647d-zdngj\" (UID: \"a50abcac-1e9b-44cb-8ee6-9509fe77429a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zdngj" Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.768559 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/machine-api-operator-755bb95488-qtl4r" event={"ID":"d9041fd4-ea1a-453e-b9c6-efe382434cc0","Type":"ContainerStarted","Data":"e80c9c60acd4498e15b412bd56f0358bfaa9ababbd56cdcf7f506233de19166d"}
Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.768604 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-qtl4r" event={"ID":"d9041fd4-ea1a-453e-b9c6-efe382434cc0","Type":"ContainerStarted","Data":"c7fcba8869bf74bfeb5ee808938ac0eb907457ceeed876d59400f9e773e6406a"}
Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.776046 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nxfr\" (UniqueName: \"kubernetes.io/projected/a3388546-9539-4838-83b7-fd30f5875e2a-kube-api-access-2nxfr\") pod \"service-ca-operator-5b9c976747-6r26r\" (UID: \"a3388546-9539-4838-83b7-fd30f5875e2a\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-6r26r"
Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.776186 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wcvvf" event={"ID":"579c5f31-2382-48db-8f59-60a7ed0827ed","Type":"ContainerStarted","Data":"72a7a0b19cbcd0e91caaf5618ad80f1a2a08780dc145bf58a95949c3b2b95891"}
Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.777770 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wcvvf" event={"ID":"579c5f31-2382-48db-8f59-60a7ed0827ed","Type":"ContainerStarted","Data":"05f6c9eca8f7e96818eab8e413fc30407384ca9e54c5255444c801baf0d572f0"}
Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.777796 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wcvvf"
Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.778195 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-2jmh4" event={"ID":"65764ab8-1117-4c6d-9af3-b8665ebeac26","Type":"ContainerStarted","Data":"9ca4c179330701fa5965ab6ca3c1e066419716feae34c40fc05e29f4af4a01ce"}
Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.778620 5113 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-wcvvf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body=
Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.778691 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wcvvf" podUID="579c5f31-2382-48db-8f59-60a7ed0827ed" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused"
Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.797842 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zhtn\" (UniqueName: \"kubernetes.io/projected/bc2585d6-3963-4f8e-98ad-e9a07ebcaf4e-kube-api-access-2zhtn\") pod \"machine-config-controller-f9cdd68f7-rrtzq\" (UID: \"bc2585d6-3963-4f8e-98ad-e9a07ebcaf4e\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-rrtzq"
Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.813308 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wctf\" (UniqueName: \"kubernetes.io/projected/7759c9b2-9727-4831-9610-fb40bdd99328-kube-api-access-6wctf\") pod \"packageserver-7d4fc7d867-g2wdf\" (UID: \"7759c9b2-9727-4831-9610-fb40bdd99328\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-g2wdf"
Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.837996 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-f74j6" event={"ID":"92e0cb66-e547-44e5-a384-bd522e554577","Type":"ContainerStarted","Data":"e8077c5b3cb0cb6d2b968a95b990e63eae37abfa40f0bc64770f3b57b3c8a2ea"}
Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.840823 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:19:45 crc kubenswrapper[5113]: E0121 09:19:45.841910 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:46.341893408 +0000 UTC m=+115.842720457 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:45 crc kubenswrapper[5113]: W0121 09:19:45.852098 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4a07741_f83d_4dd4_b52c_1e55f3629eb1.slice/crio-b08fc947c99cbfb633a13cc3c4385bf6c19e5808b8f44cb6a8ca78620c7c3490 WatchSource:0}: Error finding container b08fc947c99cbfb633a13cc3c4385bf6c19e5808b8f44cb6a8ca78620c7c3490: Status 404 returned error can't find the container with id b08fc947c99cbfb633a13cc3c4385bf6c19e5808b8f44cb6a8ca78620c7c3490
Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.852827 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e056cd26-aa6b-40b8-8c4b-cc8bef760ea6-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-nqfz9\" (UID: \"e056cd26-aa6b-40b8-8c4b-cc8bef760ea6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nqfz9"
Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.853181 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9" event={"ID":"8483d143-98a0-4b97-af59-bf98eceb47cd","Type":"ContainerStarted","Data":"6dd2ccfe8ac1efba8f933ec427611ebf64a596f41b4dfea7858ef0890911253f"}
Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.859120 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-tbd7w" event={"ID":"6e718db5-bd36-400d-8121-5afc39eb6777","Type":"ContainerStarted","Data":"719c21f8c77617a0b5d6f4f9ccc97822ee11c3aa1bc7eccd3a2e6dcce5336c70"}
Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.863770 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-qgt8d" event={"ID":"34bfafc0-b014-412d-8524-50aeb30d19ae","Type":"ContainerStarted","Data":"79bbc449a1d39f21a19409755b2ddd403f6d737aae4498f036e1b709bcfe3b13"}
Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.872939 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-f8w74" event={"ID":"d0b4d330-efd9-4d42-b2a7-bf8c0c126f8a","Type":"ContainerStarted","Data":"8b29172194d24bfcc22237bb4c9956ed87261503c3184dde2c37541b4067fb58"}
Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.879652 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-g2wdf"
Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.887007 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-8gwvh"
Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.888478 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-nrprx"]
Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.895814 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-hr5w7" event={"ID":"6f837af6-e4b6-4cc8-a869-125d0646e747","Type":"ContainerStarted","Data":"021cdb4caf24ccf7105b981a9bea59439e6e09df748810d5cc11ec5f3367465f"}
Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.895866 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-hr5w7" event={"ID":"6f837af6-e4b6-4cc8-a869-125d0646e747","Type":"ContainerStarted","Data":"42ee90bb13bd8182b0156de26978d853caebd601932a8cae67388677294713c2"}
Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.933679 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-rrtzq"
Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.935887 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zdngj"
Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.944152 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-6r26r"
Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.945596 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn"
Jan 21 09:19:45 crc kubenswrapper[5113]: E0121 09:19:45.947509 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:46.447483465 +0000 UTC m=+115.948310514 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.949024 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pltp7"]
Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.954069 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-q9rxd"
Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.962347 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jlkp8"
Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.978953 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483115-gpmgl"
Jan 21 09:19:45 crc kubenswrapper[5113]: I0121 09:19:45.994952 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nqfz9"
Jan 21 09:19:46 crc kubenswrapper[5113]: I0121 09:19:46.046380 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6jtml"]
Jan 21 09:19:46 crc kubenswrapper[5113]: I0121 09:19:46.047366 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:19:46 crc kubenswrapper[5113]: E0121 09:19:46.048470 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:46.548454808 +0000 UTC m=+116.049281847 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:46 crc kubenswrapper[5113]: I0121 09:19:46.074646 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-p2z4m"]
Jan 21 09:19:46 crc kubenswrapper[5113]: I0121 09:19:46.096026 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-twjhp" podStartSLOduration=95.096011042 podStartE2EDuration="1m35.096011042s" podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:46.093296669 +0000 UTC m=+115.594123718" watchObservedRunningTime="2026-01-21 09:19:46.096011042 +0000 UTC m=+115.596838091"
Jan 21 09:19:46 crc kubenswrapper[5113]: I0121 09:19:46.152205 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn"
Jan 21 09:19:46 crc kubenswrapper[5113]: E0121 09:19:46.153059 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:46.653044509 +0000 UTC m=+116.153871558 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:46 crc kubenswrapper[5113]: I0121 09:19:46.202049 5113 ???:1] "http: TLS handshake error from 192.168.126.11:36964: no serving certificate available for the kubelet"
Jan 21 09:19:46 crc kubenswrapper[5113]: I0121 09:19:46.202607 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-qbp2v" podStartSLOduration=95.202588415 podStartE2EDuration="1m35.202588415s" podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:46.153552412 +0000 UTC m=+115.654379451" watchObservedRunningTime="2026-01-21 09:19:46.202588415 +0000 UTC m=+115.703415454"
Jan 21 09:19:46 crc kubenswrapper[5113]: I0121 09:19:46.217859 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-tbd7w"
Jan 21 09:19:46 crc kubenswrapper[5113]: I0121 09:19:46.236360 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-tbd7w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 09:19:46 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld
Jan 21 09:19:46 crc kubenswrapper[5113]: [+]process-running ok
Jan 21 09:19:46 crc kubenswrapper[5113]: healthz check failed
Jan 21 09:19:46 crc kubenswrapper[5113]: I0121 09:19:46.236414 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-tbd7w" podUID="6e718db5-bd36-400d-8121-5afc39eb6777" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 09:19:46 crc kubenswrapper[5113]: I0121 09:19:46.242448 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-6zxjl"]
Jan 21 09:19:46 crc kubenswrapper[5113]: I0121 09:19:46.260660 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:19:46 crc kubenswrapper[5113]: E0121 09:19:46.261294 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:46.761277667 +0000 UTC m=+116.262104716 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:46 crc kubenswrapper[5113]: I0121 09:19:46.312882 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-tbd7w" podStartSLOduration=95.312863818 podStartE2EDuration="1m35.312863818s" podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:46.311394919 +0000 UTC m=+115.812221968" watchObservedRunningTime="2026-01-21 09:19:46.312863818 +0000 UTC m=+115.813690877"
Jan 21 09:19:46 crc kubenswrapper[5113]: I0121 09:19:46.362340 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn"
Jan 21 09:19:46 crc kubenswrapper[5113]: E0121 09:19:46.362690 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:46.862678102 +0000 UTC m=+116.363505151 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:46 crc kubenswrapper[5113]: I0121 09:19:46.396438 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-78rvb" podStartSLOduration=95.396420615 podStartE2EDuration="1m35.396420615s" podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:46.390034484 +0000 UTC m=+115.890861593" watchObservedRunningTime="2026-01-21 09:19:46.396420615 +0000 UTC m=+115.897247664"
Jan 21 09:19:46 crc kubenswrapper[5113]: I0121 09:19:46.478978 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:19:46 crc kubenswrapper[5113]: E0121 09:19:46.479883 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:46.97986458 +0000 UTC m=+116.480691629 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:46 crc kubenswrapper[5113]: I0121 09:19:46.510602 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mssdn"]
Jan 21 09:19:46 crc kubenswrapper[5113]: I0121 09:19:46.555176 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-pj5xv"]
Jan 21 09:19:46 crc kubenswrapper[5113]: I0121 09:19:46.577847 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-7lgzg"]
Jan 21 09:19:46 crc kubenswrapper[5113]: I0121 09:19:46.581246 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn"
Jan 21 09:19:46 crc kubenswrapper[5113]: E0121 09:19:46.581552 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:47.081538722 +0000 UTC m=+116.582365771 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:46 crc kubenswrapper[5113]: I0121 09:19:46.611083 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-q9t2j"]
Jan 21 09:19:46 crc kubenswrapper[5113]: I0121 09:19:46.611120 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xdz8l"]
Jan 21 09:19:46 crc kubenswrapper[5113]: W0121 09:19:46.640599 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a7eeb36_f834_4af3_8f38_f15bda8f1adb.slice/crio-f45ba885429dc8dfa103ca6c47f75ed51210b1283e28f11b52dce713f78d10ea WatchSource:0}: Error finding container f45ba885429dc8dfa103ca6c47f75ed51210b1283e28f11b52dce713f78d10ea: Status 404 returned error can't find the container with id f45ba885429dc8dfa103ca6c47f75ed51210b1283e28f11b52dce713f78d10ea
Jan 21 09:19:46 crc kubenswrapper[5113]: I0121 09:19:46.667531 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wcvvf" podStartSLOduration=95.667512134 podStartE2EDuration="1m35.667512134s" podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:46.664871113 +0000 UTC m=+116.165698162" watchObservedRunningTime="2026-01-21 09:19:46.667512134 +0000 UTC m=+116.168339183"
Jan 21 09:19:46 crc kubenswrapper[5113]: W0121 09:19:46.675482 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39eadf96_95ee_4c52_a0bd_2aee3c3a8d7c.slice/crio-dc081a71f3fe22de6fd51134dd60dd1674a9f7558a8e8d34716fc65c0ce2c5ff WatchSource:0}: Error finding container dc081a71f3fe22de6fd51134dd60dd1674a9f7558a8e8d34716fc65c0ce2c5ff: Status 404 returned error can't find the container with id dc081a71f3fe22de6fd51134dd60dd1674a9f7558a8e8d34716fc65c0ce2c5ff
Jan 21 09:19:46 crc kubenswrapper[5113]: I0121 09:19:46.682170 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:19:46 crc kubenswrapper[5113]: E0121 09:19:46.683606 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:47.183581074 +0000 UTC m=+116.684408123 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:46 crc kubenswrapper[5113]: W0121 09:19:46.698446 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd92d168e_5ab8_45e4_bfac_a7d9ecce89fb.slice/crio-4bcfca0d13fa3db4d9145d43a2332952348823b096a8c4c2ee0a1bbc83496d59 WatchSource:0}: Error finding container 4bcfca0d13fa3db4d9145d43a2332952348823b096a8c4c2ee0a1bbc83496d59: Status 404 returned error can't find the container with id 4bcfca0d13fa3db4d9145d43a2332952348823b096a8c4c2ee0a1bbc83496d59
Jan 21 09:19:46 crc kubenswrapper[5113]: I0121 09:19:46.786970 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn"
Jan 21 09:19:46 crc kubenswrapper[5113]: E0121 09:19:46.787303 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:47.287288311 +0000 UTC m=+116.788115360 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:46 crc kubenswrapper[5113]: I0121 09:19:46.887843 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:19:46 crc kubenswrapper[5113]: E0121 09:19:46.888199 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:47.388181652 +0000 UTC m=+116.889008701 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:46 crc kubenswrapper[5113]: I0121 09:19:46.911562 5113 generic.go:358] "Generic (PLEG): container finished" podID="493bf246-7422-45bb-a74f-34f4314445de" containerID="baed0a00794200bc31ab97d0e97f4edd2700130d71920e558b579e6b9140d883" exitCode=0
Jan 21 09:19:46 crc kubenswrapper[5113]: I0121 09:19:46.911894 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-8v2mh" event={"ID":"493bf246-7422-45bb-a74f-34f4314445de","Type":"ContainerDied","Data":"baed0a00794200bc31ab97d0e97f4edd2700130d71920e558b579e6b9140d883"}
Jan 21 09:19:46 crc kubenswrapper[5113]: I0121 09:19:46.923479 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-8gwvh"]
Jan 21 09:19:46 crc kubenswrapper[5113]: I0121 09:19:46.964203 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xdz8l" event={"ID":"78a63e50-2573-4ab1-bdc4-ff5a86a33f47","Type":"ContainerStarted","Data":"37a0a6dc9d2727966ccdd93cc7d10d1251d9ae0b7fe7b69700148518c98211e0"}
Jan 21 09:19:46 crc kubenswrapper[5113]: I0121 09:19:46.973433 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mssdn" event={"ID":"39eadf96-95ee-4c52-a0bd-2aee3c3a8d7c","Type":"ContainerStarted","Data":"dc081a71f3fe22de6fd51134dd60dd1674a9f7558a8e8d34716fc65c0ce2c5ff"}
Jan 21 09:19:46 crc kubenswrapper[5113]: I0121 09:19:46.989054 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn"
Jan 21 09:19:46 crc kubenswrapper[5113]: E0121 09:19:46.989400 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:47.489388232 +0000 UTC m=+116.990215281 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.037904 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-q8ftx" event={"ID":"fb37dc5c-fa1d-4f5c-a331-e4855245f95f","Type":"ContainerStarted","Data":"144517f6b2e9b3034dbe12bf10c88ab52a8796971e24e43ffdd91f7ebed05d73"}
Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.085260 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-qtl4r" event={"ID":"d9041fd4-ea1a-453e-b9c6-efe382434cc0","Type":"ContainerStarted","Data":"27272548af1e46f47bf8d633c0c4427de2c58ec4ce5fdd0277994c68d315f16d"}
Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.101451 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:19:47 crc kubenswrapper[5113]: E0121 09:19:47.102347 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:47.602325036 +0000 UTC m=+117.103152085 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.105648 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-pj5xv" event={"ID":"917e721d-67a0-46ba-88cb-9541fba5ebf1","Type":"ContainerStarted","Data":"5be5b84f1b9ecf415f949c95a84421a1ed39df2f7557321f19a93857e1dfd2f3"}
Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.108020 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-b46m8" event={"ID":"b85968dd-28ce-49c6-b8bf-ac62d18452b4","Type":"ContainerStarted","Data":"070ce7489448f3a55a7fc3e733f2baedbc672f3bf94cd10fd18379f0eb412efe"}
Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.115391 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-6cwdn" event={"ID":"e4a07741-f83d-4dd4-b52c-1e55f3629eb1","Type":"ContainerStarted","Data":"b08fc947c99cbfb633a13cc3c4385bf6c19e5808b8f44cb6a8ca78620c7c3490"}
Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.122575 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-6cwdn"
Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.140680 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-6cwdn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body=
Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.141005 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-6cwdn" podUID="e4a07741-f83d-4dd4-b52c-1e55f3629eb1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused"
Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.141517 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4nhkn" event={"ID":"47f4bfde-8b00-4a3c-b405-f928eda4dc04","Type":"ContainerStarted","Data":"e03a4a106e849a40077caae0cd89619a1a861d6aa5dd58bf5e9cb60b8dba790b"}
Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.180516 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-nrprx" event={"ID":"be5020dc-d64c-44bf-b6fa-10f148f9f046","Type":"ContainerStarted","Data":"ef9f39ecbc79da020ce9b315f44f03b20b1cf726e38f648ebed808ec3e77c1c0"}
Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.214870 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-fdg8l" event={"ID":"f0907a3e-c31f-491e-ac86-e289dd5d426a","Type":"ContainerStarted","Data":"1b357142a70e7a05408a133ac2843df175fb0036d39c36cbf03ed09ac8ff2ed6"}
Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.215943 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn"
Jan 21 09:19:47 crc kubenswrapper[5113]: E0121 09:19:47.217256 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:47.717242503 +0000 UTC m=+117.218069562 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.217553 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-f74j6" event={"ID":"92e0cb66-e547-44e5-a384-bd522e554577","Type":"ContainerStarted","Data":"4c8fe81d86c85b10457f62f2c42c30e546dd24f124e2fd302fc9e010638ac1d6"}
Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.218013 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-f74j6"
Jan 21 09:19:47 crc
kubenswrapper[5113]: I0121 09:19:47.234304 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-tbd7w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 09:19:47 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld Jan 21 09:19:47 crc kubenswrapper[5113]: [+]process-running ok Jan 21 09:19:47 crc kubenswrapper[5113]: healthz check failed Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.234354 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-tbd7w" podUID="6e718db5-bd36-400d-8121-5afc39eb6777" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.248906 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-q9rxd"] Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.272092 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9" event={"ID":"8483d143-98a0-4b97-af59-bf98eceb47cd","Type":"ContainerStarted","Data":"21a994f315c1898f9bc8d00ab82ce2d920f317abc47d507fa579958f983eee99"} Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.285409 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-6r26r"] Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.312265 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-g2wdf"] Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.321854 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:19:47 crc kubenswrapper[5113]: E0121 09:19:47.323287 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:47.823265392 +0000 UTC m=+117.324092441 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.336971 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-tbd7w" event={"ID":"6e718db5-bd36-400d-8121-5afc39eb6777","Type":"ContainerStarted","Data":"14744777a28b12dd0a31737ddc2346ea4ad9fd5db402abc8d5fdba488294c229"} Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.359338 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-qgt8d" event={"ID":"34bfafc0-b014-412d-8524-50aeb30d19ae","Type":"ContainerStarted","Data":"acb84612ca0cdc123eaf145a5d0dc9d5127b96c437f5a0c3ead45ab63fc413cb"} Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.393781 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-f8w74" 
event={"ID":"d0b4d330-efd9-4d42-b2a7-bf8c0c126f8a","Type":"ContainerStarted","Data":"057dbbc92aec7f09461311a4dd12502e4d8a44ce228b4491106e5fff972905a2"} Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.406467 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-hr5w7" event={"ID":"6f837af6-e4b6-4cc8-a869-125d0646e747","Type":"ContainerStarted","Data":"0c984a53c505615d2390f6caf406b4bff734ff5ec79e0d8dece60ae742bf254f"} Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.408716 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-f9nd8" event={"ID":"70164c7a-a7bf-47b3-8cb1-cfe98ab4449d","Type":"ContainerStarted","Data":"e91cc34f7784880579c2e9a2f6a2ece02450208f18e906159549a09e4c49fcef"} Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.419527 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-rrtzq"] Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.423499 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:47 crc kubenswrapper[5113]: E0121 09:19:47.423858 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:47.923842205 +0000 UTC m=+117.424669254 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.452531 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-q9t2j" event={"ID":"5380c95f-900e-457f-b1de-ad687328f6ea","Type":"ContainerStarted","Data":"ee564495a686e34c22677fb4359d713f86a4e8f18f358896e8581ffe9555e6e6"} Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.466442 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-hcv8d" podStartSLOduration=97.466425705 podStartE2EDuration="1m37.466425705s" podCreationTimestamp="2026-01-21 09:18:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:47.463394124 +0000 UTC m=+116.964221173" watchObservedRunningTime="2026-01-21 09:19:47.466425705 +0000 UTC m=+116.967252754" Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.480088 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-7lgzg" event={"ID":"d92d168e-5ab8-45e4-bfac-a7d9ecce89fb","Type":"ContainerStarted","Data":"4bcfca0d13fa3db4d9145d43a2332952348823b096a8c4c2ee0a1bbc83496d59"} Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.486317 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-6zxjl" 
event={"ID":"8f7e099d-81da-48a0-bf4a-c152167e8f40","Type":"ContainerStarted","Data":"afba4f48699b3420fa490240b69f82a71647a96392477e0d64fe790e41db8220"} Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.491498 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nqfz9"] Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.509046 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jlkp8"] Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.512589 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6jtml" event={"ID":"d762a2fd-4dde-4547-8eb4-7bffbf4dee94","Type":"ContainerStarted","Data":"fbb42d5e8824f610fb9dbbd9c2ce803943b44f742e1186061f3264da54330c1b"} Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.515870 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9" podStartSLOduration=96.515857959 podStartE2EDuration="1m36.515857959s" podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:47.513803664 +0000 UTC m=+117.014630803" watchObservedRunningTime="2026-01-21 09:19:47.515857959 +0000 UTC m=+117.016685008" Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.519211 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pltp7" event={"ID":"688053cc-c60a-4778-8b7b-f79c508916fa","Type":"ContainerStarted","Data":"fc730c215d40d325065bdd96e83e34954d0fd819ccc5e3e0aaca99a9c77413cb"} Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.524448 5113 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:19:47 crc kubenswrapper[5113]: E0121 09:19:47.526048 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:48.026028541 +0000 UTC m=+117.526855590 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.531176 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-ggq6c" event={"ID":"af605d44-e53a-4c60-8372-384c58f82b2b","Type":"ContainerStarted","Data":"32f340eb376f7db9e60634111d076915d59800a4b031f2ebb5ded237bd7a8735"} Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.551819 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483115-gpmgl"] Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.562631 5113 ???:1] "http: TLS handshake error from 192.168.126.11:37460: no serving certificate available for the kubelet" Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.585450 5113 kubelet.go:2544] "SyncLoop UPDATE" 
source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zdngj"] Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.587428 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-648hm" event={"ID":"6a7eeb36-f834-4af3-8f38-f15bda8f1adb","Type":"ContainerStarted","Data":"f45ba885429dc8dfa103ca6c47f75ed51210b1283e28f11b52dce713f78d10ea"} Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.594572 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-6cwdn" podStartSLOduration=96.594553826 podStartE2EDuration="1m36.594553826s" podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:47.587801505 +0000 UTC m=+117.088628554" watchObservedRunningTime="2026-01-21 09:19:47.594553826 +0000 UTC m=+117.095380875" Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.619824 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-p2z4m" event={"ID":"9f5b98e7-087d-4bbf-aca2-2fe25b4f9956","Type":"ContainerStarted","Data":"023c64b14e33c7e48443abee1d40b29d9cf773a3a5a45c45f902351743a86795"} Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.631812 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:47 crc kubenswrapper[5113]: E0121 09:19:47.632215 5113 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:48.132197664 +0000 UTC m=+117.633024713 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.648753 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-vxj85" event={"ID":"98edd4b9-a5db-4e54-b89a-07ea8d30ea38","Type":"ContainerStarted","Data":"df8b4496ff15ba20fed94f0b40847e1309a6554488dec17e1616a48d08f09575"} Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.660691 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wcvvf" Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.664427 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-f74j6" podStartSLOduration=96.664413056 podStartE2EDuration="1m36.664413056s" podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:47.663183133 +0000 UTC m=+117.164010182" watchObservedRunningTime="2026-01-21 09:19:47.664413056 +0000 UTC m=+117.165240095" Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.671128 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-api/machine-api-operator-755bb95488-qtl4r" podStartSLOduration=96.671108976 podStartE2EDuration="1m36.671108976s" podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:47.633099358 +0000 UTC m=+117.133926417" watchObservedRunningTime="2026-01-21 09:19:47.671108976 +0000 UTC m=+117.171936025" Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.708261 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-hr5w7" podStartSLOduration=96.708242379 podStartE2EDuration="1m36.708242379s" podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:47.702527646 +0000 UTC m=+117.203354695" watchObservedRunningTime="2026-01-21 09:19:47.708242379 +0000 UTC m=+117.209069428" Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.738557 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:19:47 crc kubenswrapper[5113]: E0121 09:19:47.739039 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:48.239023873 +0000 UTC m=+117.739850922 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.739294 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:47 crc kubenswrapper[5113]: E0121 09:19:47.745717 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:48.245703702 +0000 UTC m=+117.746530751 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.809266 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-vxj85" podStartSLOduration=96.809241933 podStartE2EDuration="1m36.809241933s" podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:47.805893454 +0000 UTC m=+117.306720503" watchObservedRunningTime="2026-01-21 09:19:47.809241933 +0000 UTC m=+117.310068982" Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.809719 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-qgt8d" podStartSLOduration=96.809714316 podStartE2EDuration="1m36.809714316s" podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:47.759635725 +0000 UTC m=+117.260462774" watchObservedRunningTime="2026-01-21 09:19:47.809714316 +0000 UTC m=+117.310541365" Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.837538 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-ggq6c" podStartSLOduration=96.83751921 podStartE2EDuration="1m36.83751921s" podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:47.83338761 +0000 UTC m=+117.334214659" watchObservedRunningTime="2026-01-21 09:19:47.83751921 +0000 UTC m=+117.338346259" Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.842373 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:19:47 crc kubenswrapper[5113]: E0121 09:19:47.842689 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:48.342673298 +0000 UTC m=+117.843500347 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.893111 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-f74j6" Jan 21 09:19:47 crc kubenswrapper[5113]: I0121 09:19:47.948374 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:47 crc kubenswrapper[5113]: E0121 09:19:47.949146 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:48.449129559 +0000 UTC m=+117.949956608 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:48 crc kubenswrapper[5113]: I0121 09:19:48.050167 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:19:48 crc kubenswrapper[5113]: E0121 09:19:48.050532 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:48.550518323 +0000 UTC m=+118.051345372 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:48 crc kubenswrapper[5113]: I0121 09:19:48.151455 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:48 crc kubenswrapper[5113]: E0121 09:19:48.151985 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:48.65197418 +0000 UTC m=+118.152801229 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:48 crc kubenswrapper[5113]: I0121 09:19:48.220500 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-tbd7w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 09:19:48 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld Jan 21 09:19:48 crc kubenswrapper[5113]: [+]process-running ok Jan 21 09:19:48 crc kubenswrapper[5113]: healthz check failed Jan 21 09:19:48 crc kubenswrapper[5113]: I0121 09:19:48.220572 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-tbd7w" podUID="6e718db5-bd36-400d-8121-5afc39eb6777" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 09:19:48 crc kubenswrapper[5113]: I0121 09:19:48.252796 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:19:48 crc kubenswrapper[5113]: E0121 09:19:48.253259 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-21 09:19:48.753239141 +0000 UTC m=+118.254066190 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:48 crc kubenswrapper[5113]: I0121 09:19:48.354902 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:48 crc kubenswrapper[5113]: E0121 09:19:48.355246 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:48.855234012 +0000 UTC m=+118.356061061 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:48 crc kubenswrapper[5113]: I0121 09:19:48.456236 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:19:48 crc kubenswrapper[5113]: E0121 09:19:48.456749 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:48.9567206 +0000 UTC m=+118.457547649 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:48 crc kubenswrapper[5113]: I0121 09:19:48.479179 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9" Jan 21 09:19:48 crc kubenswrapper[5113]: I0121 09:19:48.479230 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9" Jan 21 09:19:48 crc kubenswrapper[5113]: I0121 09:19:48.506742 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9" Jan 21 09:19:48 crc kubenswrapper[5113]: I0121 09:19:48.557619 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:48 crc kubenswrapper[5113]: E0121 09:19:48.561124 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:49.061108395 +0000 UTC m=+118.561935444 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:48 crc kubenswrapper[5113]: I0121 09:19:48.664338 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:19:48 crc kubenswrapper[5113]: E0121 09:19:48.683263 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:49.183240625 +0000 UTC m=+118.684067674 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:48 crc kubenswrapper[5113]: I0121 09:19:48.688491 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:48 crc kubenswrapper[5113]: E0121 09:19:48.689048 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:49.18903396 +0000 UTC m=+118.689861009 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:48 crc kubenswrapper[5113]: I0121 09:19:48.722578 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-pj5xv" event={"ID":"917e721d-67a0-46ba-88cb-9541fba5ebf1","Type":"ContainerStarted","Data":"481fb825d608e6c355624f0df41dd37f464ba5c6550426a3a9fe1fe6636c53c1"} Jan 21 09:19:48 crc kubenswrapper[5113]: I0121 09:19:48.731355 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-2jmh4" event={"ID":"65764ab8-1117-4c6d-9af3-b8665ebeac26","Type":"ContainerStarted","Data":"b3aad9c31ae6d592921d42c8150ec62e265a47affc0b020894e09cc33536a88e"} Jan 21 09:19:48 crc kubenswrapper[5113]: I0121 09:19:48.733944 5113 generic.go:358] "Generic (PLEG): container finished" podID="be5020dc-d64c-44bf-b6fa-10f148f9f046" containerID="bb185771c10b1130a341e4e15bd39754413594ee16a7fcbd1b104d215db942ac" exitCode=0 Jan 21 09:19:48 crc kubenswrapper[5113]: I0121 09:19:48.734080 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-nrprx" event={"ID":"be5020dc-d64c-44bf-b6fa-10f148f9f046","Type":"ContainerDied","Data":"bb185771c10b1130a341e4e15bd39754413594ee16a7fcbd1b104d215db942ac"} Jan 21 09:19:48 crc kubenswrapper[5113]: I0121 09:19:48.762023 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-p2z4m" 
event={"ID":"9f5b98e7-087d-4bbf-aca2-2fe25b4f9956","Type":"ContainerStarted","Data":"27090ed473926838b0cea25f86902034cdb5e510b76f6b974a744d5411a1bae1"} Jan 21 09:19:48 crc kubenswrapper[5113]: I0121 09:19:48.762400 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-p2z4m" Jan 21 09:19:48 crc kubenswrapper[5113]: I0121 09:19:48.781551 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-f9nd8" event={"ID":"70164c7a-a7bf-47b3-8cb1-cfe98ab4449d","Type":"ContainerStarted","Data":"570a535e0dba9917cbf320756d411d1ec62822f926cece6ce2690075f033f461"} Jan 21 09:19:48 crc kubenswrapper[5113]: I0121 09:19:48.789433 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:19:48 crc kubenswrapper[5113]: E0121 09:19:48.790044 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:49.290028014 +0000 UTC m=+118.790855063 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:48 crc kubenswrapper[5113]: I0121 09:19:48.809922 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-pj5xv" podStartSLOduration=97.809906446 podStartE2EDuration="1m37.809906446s" podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:48.785158124 +0000 UTC m=+118.285985223" watchObservedRunningTime="2026-01-21 09:19:48.809906446 +0000 UTC m=+118.310733495" Jan 21 09:19:48 crc kubenswrapper[5113]: I0121 09:19:48.822374 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pltp7" event={"ID":"688053cc-c60a-4778-8b7b-f79c508916fa","Type":"ContainerStarted","Data":"3e74a37ee0b64c30f6e8fdbc8d40a66deaf3141d22b97dee22739f1b6b66bf1a"} Jan 21 09:19:48 crc kubenswrapper[5113]: I0121 09:19:48.822420 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pltp7" event={"ID":"688053cc-c60a-4778-8b7b-f79c508916fa","Type":"ContainerStarted","Data":"02f3b957eb24cf1a40979dd32aa4bdee8e34173acb7cd120fc15cf8b74b2423b"} Jan 21 09:19:48 crc kubenswrapper[5113]: I0121 09:19:48.823008 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pltp7" Jan 21 09:19:48 crc 
kubenswrapper[5113]: I0121 09:19:48.841417 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-p2z4m" Jan 21 09:19:48 crc kubenswrapper[5113]: I0121 09:19:48.862156 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-f9nd8" podStartSLOduration=6.862132245 podStartE2EDuration="6.862132245s" podCreationTimestamp="2026-01-21 09:19:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:48.861633171 +0000 UTC m=+118.362460220" watchObservedRunningTime="2026-01-21 09:19:48.862132245 +0000 UTC m=+118.362959294" Jan 21 09:19:48 crc kubenswrapper[5113]: I0121 09:19:48.874796 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-7lgzg" event={"ID":"d92d168e-5ab8-45e4-bfac-a7d9ecce89fb","Type":"ContainerStarted","Data":"a66143153b2e267940b077c0b3fc63c8596ba6d6fa6901d77b04b92674fd864b"} Jan 21 09:19:48 crc kubenswrapper[5113]: I0121 09:19:48.888933 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jlkp8" event={"ID":"1e7dfcff-8023-43d7-9cdc-c0609d97fd55","Type":"ContainerStarted","Data":"ad9afe9e98020a54c5eda8f1cba5cc76fb8bae691b80c3ddf00ce13780d0fd00"} Jan 21 09:19:48 crc kubenswrapper[5113]: I0121 09:19:48.897678 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-p2z4m" podStartSLOduration=97.897662476 podStartE2EDuration="1m37.897662476s" podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:48.896213937 +0000 UTC m=+118.397040986" 
watchObservedRunningTime="2026-01-21 09:19:48.897662476 +0000 UTC m=+118.398489525" Jan 21 09:19:48 crc kubenswrapper[5113]: I0121 09:19:48.899519 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:48 crc kubenswrapper[5113]: E0121 09:19:48.901167 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:49.40115491 +0000 UTC m=+118.901981949 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:48 crc kubenswrapper[5113]: I0121 09:19:48.925546 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-8v2mh" event={"ID":"493bf246-7422-45bb-a74f-34f4314445de","Type":"ContainerStarted","Data":"ab4218a9ff728862a07b7c10ff5403a9ea0218783d2729bed20ca7db6cf1cfee"} Jan 21 09:19:48 crc kubenswrapper[5113]: I0121 09:19:48.926803 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-7lgzg" podStartSLOduration=6.926787866 podStartE2EDuration="6.926787866s" podCreationTimestamp="2026-01-21 09:19:42 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:48.926045406 +0000 UTC m=+118.426872445" watchObservedRunningTime="2026-01-21 09:19:48.926787866 +0000 UTC m=+118.427614915" Jan 21 09:19:48 crc kubenswrapper[5113]: I0121 09:19:48.982588 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-rrtzq" event={"ID":"bc2585d6-3963-4f8e-98ad-e9a07ebcaf4e","Type":"ContainerStarted","Data":"4058e46900a8f4f07bb726a8c7218009b6e15804c9e61f2cae01ab182fa2627f"} Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.001493 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:19:49 crc kubenswrapper[5113]: E0121 09:19:49.003330 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:49.503309355 +0000 UTC m=+119.004136404 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.023415 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-b46m8" event={"ID":"b85968dd-28ce-49c6-b8bf-ac62d18452b4","Type":"ContainerStarted","Data":"cc90761f025e135c039c22fde3706f9666e3139497f43dcdfb948bfeb7a636aa"} Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.024199 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-b46m8" Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.041005 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-6cwdn" event={"ID":"e4a07741-f83d-4dd4-b52c-1e55f3629eb1","Type":"ContainerStarted","Data":"3155e9d1cef273e6392b063fddec962e9421b375bf498810f60a901292a74463"} Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.042260 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-6cwdn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.042296 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-6cwdn" podUID="e4a07741-f83d-4dd4-b52c-1e55f3629eb1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: 
connection refused" Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.046192 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pltp7" podStartSLOduration=98.046180503 podStartE2EDuration="1m38.046180503s" podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:49.022127798 +0000 UTC m=+118.522954837" watchObservedRunningTime="2026-01-21 09:19:49.046180503 +0000 UTC m=+118.547007552" Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.066018 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-8v2mh" podStartSLOduration=98.065989463 podStartE2EDuration="1m38.065989463s" podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:49.045779322 +0000 UTC m=+118.546606371" watchObservedRunningTime="2026-01-21 09:19:49.065989463 +0000 UTC m=+118.566816512" Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.100684 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4nhkn" event={"ID":"47f4bfde-8b00-4a3c-b405-f928eda4dc04","Type":"ContainerStarted","Data":"3f85257850f12597da566771f5a35985d138d32e3dca13124f5a83c0cc5f09f5"} Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.106381 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " 
pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:49 crc kubenswrapper[5113]: E0121 09:19:49.108309 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:49.608293706 +0000 UTC m=+119.109120765 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.112831 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-b46m8" podStartSLOduration=98.112811977 podStartE2EDuration="1m38.112811977s" podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:49.094230679 +0000 UTC m=+118.595057728" watchObservedRunningTime="2026-01-21 09:19:49.112811977 +0000 UTC m=+118.613639026" Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.145005 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-fdg8l" event={"ID":"f0907a3e-c31f-491e-ac86-e289dd5d426a","Type":"ContainerStarted","Data":"c8ab31f3f30c5e653ff5810fe156fb9f0488906b30084ddf88046d20163ac5d5"} Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.148979 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-authentication-operator/authentication-operator-7f5c659b84-4nhkn" podStartSLOduration=98.148955784 podStartE2EDuration="1m38.148955784s" podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:49.138421872 +0000 UTC m=+118.639248921" watchObservedRunningTime="2026-01-21 09:19:49.148955784 +0000 UTC m=+118.649782833" Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.157076 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nqfz9" event={"ID":"e056cd26-aa6b-40b8-8c4b-cc8bef760ea6","Type":"ContainerStarted","Data":"8f4b1083c41f596a60a8c7ef9ffece761fbf37ed06039ec4226c25a7350840cf"} Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.179515 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-fdg8l" podStartSLOduration=98.179497942 podStartE2EDuration="1m38.179497942s" podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:49.179398699 +0000 UTC m=+118.680225748" watchObservedRunningTime="2026-01-21 09:19:49.179497942 +0000 UTC m=+118.680324991" Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.191308 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-q9rxd" event={"ID":"248b0806-1d40-4954-a74e-6282de18ff7b","Type":"ContainerStarted","Data":"8e2c48012a8db8cba0469d2d3af830db5cf47f399eea7026187b5ea1f7b577be"} Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.207233 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:19:49 crc kubenswrapper[5113]: E0121 09:19:49.209094 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:49.709072584 +0000 UTC m=+119.209899633 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.221693 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zdngj" event={"ID":"a50abcac-1e9b-44cb-8ee6-9509fe77429a","Type":"ContainerStarted","Data":"375215e931f1c9472a5fabb5b6a921ffbf43319b7a495c683c224fa1e70a33bc"} Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.221768 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zdngj" event={"ID":"a50abcac-1e9b-44cb-8ee6-9509fe77429a","Type":"ContainerStarted","Data":"df82925d401abfbe469559b265aa832dae0b132f4ec151384a8167f3229f78d1"} Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.222197 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zdngj" Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 
09:19:49.223157 5113 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-zdngj container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.223210 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zdngj" podUID="a50abcac-1e9b-44cb-8ee6-9509fe77429a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.228991 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-tbd7w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 09:19:49 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld Jan 21 09:19:49 crc kubenswrapper[5113]: [+]process-running ok Jan 21 09:19:49 crc kubenswrapper[5113]: healthz check failed Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.229036 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-tbd7w" podUID="6e718db5-bd36-400d-8121-5afc39eb6777" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.237213 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483115-gpmgl" event={"ID":"dcdec0ee-3553-4c15-ad1f-eb6b29eec33a","Type":"ContainerStarted","Data":"a58d38491fbab70e7368aae66a771c77b34dfbd75ab401e8d3a7dc9af0d3d0a1"} Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.245514 5113 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-q9rxd" podStartSLOduration=98.245502799 podStartE2EDuration="1m38.245502799s" podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:49.220309445 +0000 UTC m=+118.721136514" watchObservedRunningTime="2026-01-21 09:19:49.245502799 +0000 UTC m=+118.746329848"
Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.247379 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zdngj" podStartSLOduration=98.247373159 podStartE2EDuration="1m38.247373159s" podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:49.245109049 +0000 UTC m=+118.745936098" watchObservedRunningTime="2026-01-21 09:19:49.247373159 +0000 UTC m=+118.748200208"
Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.269426 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483115-gpmgl" podStartSLOduration=99.269406699 podStartE2EDuration="1m39.269406699s" podCreationTimestamp="2026-01-21 09:18:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:49.268289849 +0000 UTC m=+118.769116898" watchObservedRunningTime="2026-01-21 09:19:49.269406699 +0000 UTC m=+118.770233748"
Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.279552 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-6r26r" event={"ID":"a3388546-9539-4838-83b7-fd30f5875e2a","Type":"ContainerStarted","Data":"6ebad96ab5f1bcbe31660c605c9250f90e288e0e913ec8e8c3e2cbc69bef49f6"}
Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.279609 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-6r26r" event={"ID":"a3388546-9539-4838-83b7-fd30f5875e2a","Type":"ContainerStarted","Data":"f8c89343051f922ee20f5f4137cf697b3a7f6d8c21b9f80dc777cc9c215b309d"}
Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.291171 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-f8w74" event={"ID":"d0b4d330-efd9-4d42-b2a7-bf8c0c126f8a","Type":"ContainerStarted","Data":"c9ebfcd21f5653706202538c2c25f744459bb2534f7982ece847b46c9f7ccd10"}
Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.297141 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6jtml" event={"ID":"d762a2fd-4dde-4547-8eb4-7bffbf4dee94","Type":"ContainerStarted","Data":"e42376443520abc09e174cc43d87c497ce4b27756f950a40c0af45ee61d77000"}
Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.300146 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-6r26r" podStartSLOduration=98.300130272 podStartE2EDuration="1m38.300130272s" podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:49.299232148 +0000 UTC m=+118.800059197" watchObservedRunningTime="2026-01-21 09:19:49.300130272 +0000 UTC m=+118.800957321"
Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.310845 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn"
Jan 21 09:19:49 crc kubenswrapper[5113]: E0121 09:19:49.314965 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:49.814952849 +0000 UTC m=+119.315779898 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.323104 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-q9t2j" event={"ID":"5380c95f-900e-457f-b1de-ad687328f6ea","Type":"ContainerStarted","Data":"84c500da228a2c2ae1aeaae813ed3e633ecdaf6131a6860703e5caeea60a1eea"}
Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.323582 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-f8w74" podStartSLOduration=98.32356632 podStartE2EDuration="1m38.32356632s" podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:49.323361894 +0000 UTC m=+118.824188943" watchObservedRunningTime="2026-01-21 09:19:49.32356632 +0000 UTC m=+118.824393369"
Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.362213 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-6zxjl" event={"ID":"8f7e099d-81da-48a0-bf4a-c152167e8f40","Type":"ContainerStarted","Data":"b6fc78891198b1aae3c811ea72d652b8b660b4df94fde6c26f6ca98f75021677"}
Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.362763 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-6jtml" podStartSLOduration=98.362743489 podStartE2EDuration="1m38.362743489s" podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:49.361246928 +0000 UTC m=+118.862073977" watchObservedRunningTime="2026-01-21 09:19:49.362743489 +0000 UTC m=+118.863570538"
Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.363368 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-6zxjl"
Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.372783 5113 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-6zxjl container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body=
Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.372849 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-6zxjl" podUID="8f7e099d-81da-48a0-bf4a-c152167e8f40" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused"
Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.396332 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-g2wdf" event={"ID":"7759c9b2-9727-4831-9610-fb40bdd99328","Type":"ContainerStarted","Data":"3761084a2a82b690db589eba873824961436afe943e0496b68eb67388f338ac1"}
Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.397169 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-g2wdf"
Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.415182 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:19:49 crc kubenswrapper[5113]: E0121 09:19:49.416614 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:49.916598301 +0000 UTC m=+119.417425350 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.447642 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-6zxjl" podStartSLOduration=98.447624421 podStartE2EDuration="1m38.447624421s" podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:49.397008586 +0000 UTC m=+118.897835635" watchObservedRunningTime="2026-01-21 09:19:49.447624421 +0000 UTC m=+118.948451470"
Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.450996 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-648hm" event={"ID":"6a7eeb36-f834-4af3-8f38-f15bda8f1adb","Type":"ContainerStarted","Data":"fcba3beb0653e5b0d464f1c9cbe58bc8c963ce2802d8e6dbd834f40f6ae049b0"}
Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.451057 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-648hm"
Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.475165 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-vxj85" event={"ID":"98edd4b9-a5db-4e54-b89a-07ea8d30ea38","Type":"ContainerStarted","Data":"1ee0a39d745e8516ee3c3ade385554e89fd5b82343b902c8375ac2a493b0200f"}
Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.484281 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-g2wdf" podStartSLOduration=98.484266032 podStartE2EDuration="1m38.484266032s" podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:49.448599437 +0000 UTC m=+118.949426486" watchObservedRunningTime="2026-01-21 09:19:49.484266032 +0000 UTC m=+118.985093081"
Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.503235 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-8gwvh" event={"ID":"ca8d2ce1-3e0c-44f8-9327-4935a2691c4d","Type":"ContainerStarted","Data":"46d8fb4f14ca38e88989fc4f77a7cfcf621fe164f655a85f330b4b80a540bf8d"}
Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.512007 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-648hm"
Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.516544 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn"
Jan 21 09:19:49 crc kubenswrapper[5113]: E0121 09:19:49.518405 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:50.018392726 +0000 UTC m=+119.519219775 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.547031 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mssdn" event={"ID":"39eadf96-95ee-4c52-a0bd-2aee3c3a8d7c","Type":"ContainerStarted","Data":"61c7dc42cb196d80a279d900c4b7da875d16ada1c4cd6632f8fb7780e069e4f4"}
Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.553070 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-648hm" podStartSLOduration=7.553056044 podStartE2EDuration="7.553056044s" podCreationTimestamp="2026-01-21 09:19:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:49.487322054 +0000 UTC m=+118.988149103" watchObservedRunningTime="2026-01-21 09:19:49.553056044 +0000 UTC m=+119.053883093"
Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.567558 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-q8ftx" event={"ID":"fb37dc5c-fa1d-4f5c-a331-e4855245f95f","Type":"ContainerStarted","Data":"08ab44e36c2b1ec5080086151993d74acd9d37e49e791e57fc0f2e4927c208d0"}
Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.584232 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-bnjd9"
Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.590210 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mssdn" podStartSLOduration=98.590200009 podStartE2EDuration="1m38.590200009s" podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:49.590076795 +0000 UTC m=+119.090903844" watchObservedRunningTime="2026-01-21 09:19:49.590200009 +0000 UTC m=+119.091027058"
Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.635661 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:19:49 crc kubenswrapper[5113]: E0121 09:19:49.637368 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:50.137350711 +0000 UTC m=+119.638177760 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.707464 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-b46m8"
Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.738851 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn"
Jan 21 09:19:49 crc kubenswrapper[5113]: E0121 09:19:49.739419 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:50.239401374 +0000 UTC m=+119.740228423 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.823284 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-q8ftx" podStartSLOduration=98.823265409 podStartE2EDuration="1m38.823265409s" podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:49.778026578 +0000 UTC m=+119.278853627" watchObservedRunningTime="2026-01-21 09:19:49.823265409 +0000 UTC m=+119.324092458"
Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.840199 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:19:49 crc kubenswrapper[5113]: E0121 09:19:49.840392 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:50.340364317 +0000 UTC m=+119.841191356 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.840614 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn"
Jan 21 09:19:49 crc kubenswrapper[5113]: E0121 09:19:49.841240 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:50.34123206 +0000 UTC m=+119.842059099 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.942433 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:19:49 crc kubenswrapper[5113]: E0121 09:19:49.942658 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:50.442626505 +0000 UTC m=+119.943453554 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:49 crc kubenswrapper[5113]: I0121 09:19:49.943017 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn"
Jan 21 09:19:49 crc kubenswrapper[5113]: E0121 09:19:49.943345 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:50.443331734 +0000 UTC m=+119.944158783 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.044208 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:19:50 crc kubenswrapper[5113]: E0121 09:19:50.044671 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:50.544654687 +0000 UTC m=+120.045481736 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.133478 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-648hm"]
Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.146333 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn"
Jan 21 09:19:50 crc kubenswrapper[5113]: E0121 09:19:50.146758 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:50.6467231 +0000 UTC m=+120.147550149 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.179802 5113 ???:1] "http: TLS handshake error from 192.168.126.11:37468: no serving certificate available for the kubelet"
Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.222771 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-tbd7w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 09:19:50 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld
Jan 21 09:19:50 crc kubenswrapper[5113]: [+]process-running ok
Jan 21 09:19:50 crc kubenswrapper[5113]: healthz check failed
Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.222845 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-tbd7w" podUID="6e718db5-bd36-400d-8121-5afc39eb6777" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.247537 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:19:50 crc kubenswrapper[5113]: E0121 09:19:50.247755 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:50.747709204 +0000 UTC m=+120.248536263 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.248258 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn"
Jan 21 09:19:50 crc kubenswrapper[5113]: E0121 09:19:50.248665 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:50.748648639 +0000 UTC m=+120.249475778 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.349967 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:19:50 crc kubenswrapper[5113]: E0121 09:19:50.350133 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:50.850108616 +0000 UTC m=+120.350935665 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.350396 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn"
Jan 21 09:19:50 crc kubenswrapper[5113]: E0121 09:19:50.350768 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:50.850761103 +0000 UTC m=+120.351588152 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.397258 5113 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-g2wdf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.397357 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-g2wdf" podUID="7759c9b2-9727-4831-9610-fb40bdd99328" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.451680 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:19:50 crc kubenswrapper[5113]: E0121 09:19:50.452060 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:50.952043975 +0000 UTC m=+120.452871024 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.553090 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn"
Jan 21 09:19:50 crc kubenswrapper[5113]: E0121 09:19:50.553454 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:51.05343871 +0000 UTC m=+120.554265749 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.572083 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jlkp8" event={"ID":"1e7dfcff-8023-43d7-9cdc-c0609d97fd55","Type":"ContainerStarted","Data":"754c27d9835ffe7d4602d09eb9fe2a6264317c2ef306200d1485510829d83692"}
Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.573685 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-rrtzq" event={"ID":"bc2585d6-3963-4f8e-98ad-e9a07ebcaf4e","Type":"ContainerStarted","Data":"dbf2f710f64e34c7bab809c4fc9354db70fa72e69471cf0e6315580f9c52a191"}
Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.573741 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-rrtzq" event={"ID":"bc2585d6-3963-4f8e-98ad-e9a07ebcaf4e","Type":"ContainerStarted","Data":"7c63213ccc9c5c67a2e2f2673f114c604fc9a27f09e25fd83e1c1fab1529a28d"}
Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.575213 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-fdg8l" event={"ID":"f0907a3e-c31f-491e-ac86-e289dd5d426a","Type":"ContainerStarted","Data":"06b287d6985f04dc9879341cc0cce385cf7e0c841eb43cffd5c3b996a5e5a140"}
Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.576244 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nqfz9" event={"ID":"e056cd26-aa6b-40b8-8c4b-cc8bef760ea6","Type":"ContainerStarted","Data":"088cfe4ca70d4f2e56080fa7e90c2b95022b40c658243f8b79eca72e8e3479fd"}
Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.577435 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-q9rxd" event={"ID":"248b0806-1d40-4954-a74e-6282de18ff7b","Type":"ContainerStarted","Data":"0002c22cf92d48c9e265510fd28a41e59396c59a2bc0d95bca15225490f27d37"}
Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.580396 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483115-gpmgl" event={"ID":"dcdec0ee-3553-4c15-ad1f-eb6b29eec33a","Type":"ContainerStarted","Data":"fcc0dca8a59f603c746f368a24d8750249582b240afc7377bebaa3bdb1e96cbb"}
Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.581789 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-q9t2j" event={"ID":"5380c95f-900e-457f-b1de-ad687328f6ea","Type":"ContainerStarted","Data":"bd70f1e575d03aa42f228b5e663e73fa411d853ef12d50d4ed46109661415ec8"}
Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.582131 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-q9t2j"
Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.582925 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-g2wdf" event={"ID":"7759c9b2-9727-4831-9610-fb40bdd99328","Type":"ContainerStarted","Data":"1b375813d09ffe73eda033e47d8f3e23d2780bc834cfda213caef0c884879f6e"}
Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.584666 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-8gwvh" event={"ID":"ca8d2ce1-3e0c-44f8-9327-4935a2691c4d","Type":"ContainerStarted","Data":"42f12ae5af36e3e66c99e3d5df61eca820dd7d37eadb7230cc2bad9909614985"}
Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.584690 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-8gwvh" event={"ID":"ca8d2ce1-3e0c-44f8-9327-4935a2691c4d","Type":"ContainerStarted","Data":"82301f15f8ef419c5e72dc8942ce5945d4b5de0c117a7a6914278cfb7d8b6ad8"}
Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.586124 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mssdn" event={"ID":"39eadf96-95ee-4c52-a0bd-2aee3c3a8d7c","Type":"ContainerStarted","Data":"3e003792aa7607ee162806d0d03b43cd2d4e6159097034a3d56384581da764ac"}
Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.587171 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-2jmh4" event={"ID":"65764ab8-1117-4c6d-9af3-b8665ebeac26","Type":"ContainerStarted","Data":"f785f53b3492598105f724b8792ab882967563bdfaea8b115bfec99dc727ec0c"}
Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.588909 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-nrprx" event={"ID":"be5020dc-d64c-44bf-b6fa-10f148f9f046","Type":"ContainerStarted","Data":"73e080892f6c98bd402dad945b6e0ac421dc087ac93ab804784361052f3c8255"}
Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.590326 5113 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-6zxjl container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body=
Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.590364 5113 prober.go:120] "Probe failed" probeType="Readiness" 
pod="openshift-marketplace/marketplace-operator-547dbd544d-6zxjl" podUID="8f7e099d-81da-48a0-bf4a-c152167e8f40" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.592159 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-6cwdn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.592225 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-6cwdn" podUID="e4a07741-f83d-4dd4-b52c-1e55f3629eb1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.592952 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-8v2mh" Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.604891 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-zdngj" Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.611678 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-jlkp8" podStartSLOduration=99.611662339 podStartE2EDuration="1m39.611662339s" podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:50.605147934 +0000 UTC m=+120.105974983" watchObservedRunningTime="2026-01-21 09:19:50.611662339 +0000 UTC 
m=+120.112489388" Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.653816 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:19:50 crc kubenswrapper[5113]: E0121 09:19:50.656169 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:51.15615389 +0000 UTC m=+120.656980939 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.733907 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-2jmh4" podStartSLOduration=99.733891251 podStartE2EDuration="1m39.733891251s" podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:50.652095181 +0000 UTC m=+120.152922230" watchObservedRunningTime="2026-01-21 09:19:50.733891251 +0000 UTC m=+120.234718300" Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.756398 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:50 crc kubenswrapper[5113]: E0121 09:19:50.756821 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:51.256804715 +0000 UTC m=+120.757631764 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.778109 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-nrprx" podStartSLOduration=99.778087585 podStartE2EDuration="1m39.778087585s" podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:50.747069704 +0000 UTC m=+120.247896753" watchObservedRunningTime="2026-01-21 09:19:50.778087585 +0000 UTC m=+120.278914634" Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.779753 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-rrtzq" podStartSLOduration=99.779725519 podStartE2EDuration="1m39.779725519s" 
podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:50.774015616 +0000 UTC m=+120.274842675" watchObservedRunningTime="2026-01-21 09:19:50.779725519 +0000 UTC m=+120.280552578" Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.838546 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-q9t2j" podStartSLOduration=8.838527883 podStartE2EDuration="8.838527883s" podCreationTimestamp="2026-01-21 09:19:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:50.828960127 +0000 UTC m=+120.329787176" watchObservedRunningTime="2026-01-21 09:19:50.838527883 +0000 UTC m=+120.339354932" Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.857459 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:19:50 crc kubenswrapper[5113]: E0121 09:19:50.858076 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:51.358060306 +0000 UTC m=+120.858887355 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.871006 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nqfz9" podStartSLOduration=99.870986832 podStartE2EDuration="1m39.870986832s" podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:50.870132659 +0000 UTC m=+120.370959708" watchObservedRunningTime="2026-01-21 09:19:50.870986832 +0000 UTC m=+120.371813941" Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.915926 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-8gwvh" podStartSLOduration=99.915911875 podStartE2EDuration="1m39.915911875s" podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:50.915108964 +0000 UTC m=+120.415936013" watchObservedRunningTime="2026-01-21 09:19:50.915911875 +0000 UTC m=+120.416738924" Jan 21 09:19:50 crc kubenswrapper[5113]: I0121 09:19:50.960437 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:50 crc kubenswrapper[5113]: E0121 09:19:50.960717 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:51.460701604 +0000 UTC m=+120.961528643 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.011683 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-g2wdf" Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.061876 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:19:51 crc kubenswrapper[5113]: E0121 09:19:51.062028 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:51.562000977 +0000 UTC m=+121.062828026 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.062386 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:51 crc kubenswrapper[5113]: E0121 09:19:51.062874 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:51.56286469 +0000 UTC m=+121.063691739 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.164223 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:19:51 crc kubenswrapper[5113]: E0121 09:19:51.164546 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:51.664529472 +0000 UTC m=+121.165356521 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.221976 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-tbd7w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 09:19:51 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld Jan 21 09:19:51 crc kubenswrapper[5113]: [+]process-running ok Jan 21 09:19:51 crc kubenswrapper[5113]: healthz check failed Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.222028 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-tbd7w" podUID="6e718db5-bd36-400d-8121-5afc39eb6777" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.265615 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:51 crc kubenswrapper[5113]: E0121 09:19:51.265969 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 09:19:51.765956597 +0000 UTC m=+121.266783636 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.366553 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:19:51 crc kubenswrapper[5113]: E0121 09:19:51.366846 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:51.866830978 +0000 UTC m=+121.367658027 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.468104 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:51 crc kubenswrapper[5113]: E0121 09:19:51.468567 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:51.968555941 +0000 UTC m=+121.469382990 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.524430 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vlng9"] Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.555869 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vlng9" Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.558203 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vlng9"] Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.590472 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:19:51 crc kubenswrapper[5113]: E0121 09:19:51.590684 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:52.090660041 +0000 UTC m=+121.591487090 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.594148 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.604012 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xdz8l" event={"ID":"78a63e50-2573-4ab1-bdc4-ff5a86a33f47","Type":"ContainerStarted","Data":"b603c925e03587c478feb2d9afbc273d3dec9eca0c9b6fcd3ea323e785773f25"} Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.607496 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-648hm" podUID="6a7eeb36-f834-4af3-8f38-f15bda8f1adb" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://fcba3beb0653e5b0d464f1c9cbe58bc8c963ce2802d8e6dbd834f40f6ae049b0" gracePeriod=30 Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.617296 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-8v2mh" Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.632194 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-6zxjl" Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.692612 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.692987 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f864d6cd-a6bf-4de0-ad26-ed72ff0c0544-catalog-content\") pod \"certified-operators-vlng9\" (UID: \"f864d6cd-a6bf-4de0-ad26-ed72ff0c0544\") " pod="openshift-marketplace/certified-operators-vlng9" Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.693106 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f864d6cd-a6bf-4de0-ad26-ed72ff0c0544-utilities\") pod \"certified-operators-vlng9\" (UID: \"f864d6cd-a6bf-4de0-ad26-ed72ff0c0544\") " pod="openshift-marketplace/certified-operators-vlng9" Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.693200 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85d9w\" (UniqueName: \"kubernetes.io/projected/f864d6cd-a6bf-4de0-ad26-ed72ff0c0544-kube-api-access-85d9w\") pod \"certified-operators-vlng9\" (UID: \"f864d6cd-a6bf-4de0-ad26-ed72ff0c0544\") " pod="openshift-marketplace/certified-operators-vlng9" Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.741060 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-frj7n"] Jan 21 09:19:51 crc kubenswrapper[5113]: E0121 09:19:51.745624 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 09:19:52.245598729 +0000 UTC m=+121.746425778 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.753276 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-frj7n"]
Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.756986 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-frj7n"
Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.763221 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\""
Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.794040 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.794329 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f864d6cd-a6bf-4de0-ad26-ed72ff0c0544-utilities\") pod \"certified-operators-vlng9\" (UID: \"f864d6cd-a6bf-4de0-ad26-ed72ff0c0544\") " pod="openshift-marketplace/certified-operators-vlng9"
Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.794361 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-85d9w\" (UniqueName: \"kubernetes.io/projected/f864d6cd-a6bf-4de0-ad26-ed72ff0c0544-kube-api-access-85d9w\") pod \"certified-operators-vlng9\" (UID: \"f864d6cd-a6bf-4de0-ad26-ed72ff0c0544\") " pod="openshift-marketplace/certified-operators-vlng9"
Jan 21 09:19:51 crc kubenswrapper[5113]: E0121 09:19:51.794488 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:52.294464938 +0000 UTC m=+121.795291987 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.794721 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f864d6cd-a6bf-4de0-ad26-ed72ff0c0544-catalog-content\") pod \"certified-operators-vlng9\" (UID: \"f864d6cd-a6bf-4de0-ad26-ed72ff0c0544\") " pod="openshift-marketplace/certified-operators-vlng9"
Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.795301 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f864d6cd-a6bf-4de0-ad26-ed72ff0c0544-utilities\") pod \"certified-operators-vlng9\" (UID: \"f864d6cd-a6bf-4de0-ad26-ed72ff0c0544\") " pod="openshift-marketplace/certified-operators-vlng9"
Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.795403 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f864d6cd-a6bf-4de0-ad26-ed72ff0c0544-catalog-content\") pod \"certified-operators-vlng9\" (UID: \"f864d6cd-a6bf-4de0-ad26-ed72ff0c0544\") " pod="openshift-marketplace/certified-operators-vlng9"
Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.860631 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-85d9w\" (UniqueName: \"kubernetes.io/projected/f864d6cd-a6bf-4de0-ad26-ed72ff0c0544-kube-api-access-85d9w\") pod \"certified-operators-vlng9\" (UID: \"f864d6cd-a6bf-4de0-ad26-ed72ff0c0544\") " pod="openshift-marketplace/certified-operators-vlng9"
Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.895839 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2097e4fe-30fc-4341-90b7-14877224a474-utilities\") pod \"community-operators-frj7n\" (UID: \"2097e4fe-30fc-4341-90b7-14877224a474\") " pod="openshift-marketplace/community-operators-frj7n"
Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.896107 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfxpb\" (UniqueName: \"kubernetes.io/projected/2097e4fe-30fc-4341-90b7-14877224a474-kube-api-access-gfxpb\") pod \"community-operators-frj7n\" (UID: \"2097e4fe-30fc-4341-90b7-14877224a474\") " pod="openshift-marketplace/community-operators-frj7n"
Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.896257 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2097e4fe-30fc-4341-90b7-14877224a474-catalog-content\") pod \"community-operators-frj7n\" (UID: \"2097e4fe-30fc-4341-90b7-14877224a474\") " pod="openshift-marketplace/community-operators-frj7n"
Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.896411 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn"
Jan 21 09:19:51 crc kubenswrapper[5113]: E0121 09:19:51.896825 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:52.396812608 +0000 UTC m=+121.897639657 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.922196 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vlng9"
Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.924986 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-clhrp"]
Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.933955 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-clhrp"
Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.951053 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-clhrp"]
Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.997311 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.997551 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aca680f-7aa4-47e3-a52c-08e8e8a39c84-utilities\") pod \"certified-operators-clhrp\" (UID: \"7aca680f-7aa4-47e3-a52c-08e8e8a39c84\") " pod="openshift-marketplace/certified-operators-clhrp"
Jan 21 09:19:51 crc kubenswrapper[5113]: E0121 09:19:51.997686 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:52.497647648 +0000 UTC m=+121.998474697 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.997808 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2thbj\" (UniqueName: \"kubernetes.io/projected/7aca680f-7aa4-47e3-a52c-08e8e8a39c84-kube-api-access-2thbj\") pod \"certified-operators-clhrp\" (UID: \"7aca680f-7aa4-47e3-a52c-08e8e8a39c84\") " pod="openshift-marketplace/certified-operators-clhrp"
Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.997958 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn"
Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.998057 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2097e4fe-30fc-4341-90b7-14877224a474-utilities\") pod \"community-operators-frj7n\" (UID: \"2097e4fe-30fc-4341-90b7-14877224a474\") " pod="openshift-marketplace/community-operators-frj7n"
Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.998092 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gfxpb\" (UniqueName: \"kubernetes.io/projected/2097e4fe-30fc-4341-90b7-14877224a474-kube-api-access-gfxpb\") pod \"community-operators-frj7n\" (UID: \"2097e4fe-30fc-4341-90b7-14877224a474\") " pod="openshift-marketplace/community-operators-frj7n"
Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.998157 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aca680f-7aa4-47e3-a52c-08e8e8a39c84-catalog-content\") pod \"certified-operators-clhrp\" (UID: \"7aca680f-7aa4-47e3-a52c-08e8e8a39c84\") " pod="openshift-marketplace/certified-operators-clhrp"
Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.998238 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2097e4fe-30fc-4341-90b7-14877224a474-catalog-content\") pod \"community-operators-frj7n\" (UID: \"2097e4fe-30fc-4341-90b7-14877224a474\") " pod="openshift-marketplace/community-operators-frj7n"
Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.998699 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2097e4fe-30fc-4341-90b7-14877224a474-catalog-content\") pod \"community-operators-frj7n\" (UID: \"2097e4fe-30fc-4341-90b7-14877224a474\") " pod="openshift-marketplace/community-operators-frj7n"
Jan 21 09:19:51 crc kubenswrapper[5113]: I0121 09:19:51.999017 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2097e4fe-30fc-4341-90b7-14877224a474-utilities\") pod \"community-operators-frj7n\" (UID: \"2097e4fe-30fc-4341-90b7-14877224a474\") " pod="openshift-marketplace/community-operators-frj7n"
Jan 21 09:19:51 crc kubenswrapper[5113]: E0121 09:19:51.999209 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:52.499201549 +0000 UTC m=+122.000028598 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.043429 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfxpb\" (UniqueName: \"kubernetes.io/projected/2097e4fe-30fc-4341-90b7-14877224a474-kube-api-access-gfxpb\") pod \"community-operators-frj7n\" (UID: \"2097e4fe-30fc-4341-90b7-14877224a474\") " pod="openshift-marketplace/community-operators-frj7n"
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.090690 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-frj7n"
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.100322 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.100630 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aca680f-7aa4-47e3-a52c-08e8e8a39c84-catalog-content\") pod \"certified-operators-clhrp\" (UID: \"7aca680f-7aa4-47e3-a52c-08e8e8a39c84\") " pod="openshift-marketplace/certified-operators-clhrp"
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.100722 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aca680f-7aa4-47e3-a52c-08e8e8a39c84-utilities\") pod \"certified-operators-clhrp\" (UID: \"7aca680f-7aa4-47e3-a52c-08e8e8a39c84\") " pod="openshift-marketplace/certified-operators-clhrp"
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.100765 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2thbj\" (UniqueName: \"kubernetes.io/projected/7aca680f-7aa4-47e3-a52c-08e8e8a39c84-kube-api-access-2thbj\") pod \"certified-operators-clhrp\" (UID: \"7aca680f-7aa4-47e3-a52c-08e8e8a39c84\") " pod="openshift-marketplace/certified-operators-clhrp"
Jan 21 09:19:52 crc kubenswrapper[5113]: E0121 09:19:52.101483 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:52.601461958 +0000 UTC m=+122.102289007 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.102096 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aca680f-7aa4-47e3-a52c-08e8e8a39c84-catalog-content\") pod \"certified-operators-clhrp\" (UID: \"7aca680f-7aa4-47e3-a52c-08e8e8a39c84\") " pod="openshift-marketplace/certified-operators-clhrp"
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.106248 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aca680f-7aa4-47e3-a52c-08e8e8a39c84-utilities\") pod \"certified-operators-clhrp\" (UID: \"7aca680f-7aa4-47e3-a52c-08e8e8a39c84\") " pod="openshift-marketplace/certified-operators-clhrp"
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.119009 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7jsb2"]
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.138038 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7jsb2"]
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.138209 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7jsb2"
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.141222 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2thbj\" (UniqueName: \"kubernetes.io/projected/7aca680f-7aa4-47e3-a52c-08e8e8a39c84-kube-api-access-2thbj\") pod \"certified-operators-clhrp\" (UID: \"7aca680f-7aa4-47e3-a52c-08e8e8a39c84\") " pod="openshift-marketplace/certified-operators-clhrp"
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.208096 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxv45\" (UniqueName: \"kubernetes.io/projected/2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0-kube-api-access-jxv45\") pod \"community-operators-7jsb2\" (UID: \"2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0\") " pod="openshift-marketplace/community-operators-7jsb2"
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.208212 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn"
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.208395 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0-catalog-content\") pod \"community-operators-7jsb2\" (UID: \"2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0\") " pod="openshift-marketplace/community-operators-7jsb2"
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.208459 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0-utilities\") pod \"community-operators-7jsb2\" (UID: \"2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0\") " pod="openshift-marketplace/community-operators-7jsb2"
Jan 21 09:19:52 crc kubenswrapper[5113]: E0121 09:19:52.208519 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:52.708506894 +0000 UTC m=+122.209333943 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.239917 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-tbd7w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 09:19:52 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld
Jan 21 09:19:52 crc kubenswrapper[5113]: [+]process-running ok
Jan 21 09:19:52 crc kubenswrapper[5113]: healthz check failed
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.239976 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-tbd7w" podUID="6e718db5-bd36-400d-8121-5afc39eb6777" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.300139 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-clhrp"
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.311370 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.311513 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0-catalog-content\") pod \"community-operators-7jsb2\" (UID: \"2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0\") " pod="openshift-marketplace/community-operators-7jsb2"
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.311545 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0-utilities\") pod \"community-operators-7jsb2\" (UID: \"2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0\") " pod="openshift-marketplace/community-operators-7jsb2"
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.311605 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jxv45\" (UniqueName: \"kubernetes.io/projected/2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0-kube-api-access-jxv45\") pod \"community-operators-7jsb2\" (UID: \"2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0\") " pod="openshift-marketplace/community-operators-7jsb2"
Jan 21 09:19:52 crc kubenswrapper[5113]: E0121 09:19:52.312265 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:52.812247492 +0000 UTC m=+122.313074531 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.312624 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0-catalog-content\") pod \"community-operators-7jsb2\" (UID: \"2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0\") " pod="openshift-marketplace/community-operators-7jsb2"
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.312988 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0-utilities\") pod \"community-operators-7jsb2\" (UID: \"2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0\") " pod="openshift-marketplace/community-operators-7jsb2"
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.315469 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vlng9"]
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.344509 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxv45\" (UniqueName: \"kubernetes.io/projected/2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0-kube-api-access-jxv45\") pod \"community-operators-7jsb2\" (UID: \"2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0\") " pod="openshift-marketplace/community-operators-7jsb2"
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.412634 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn"
Jan 21 09:19:52 crc kubenswrapper[5113]: E0121 09:19:52.413169 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:52.913150823 +0000 UTC m=+122.413977872 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.484425 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7jsb2"
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.515100 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:19:52 crc kubenswrapper[5113]: E0121 09:19:52.515198 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:53.015181655 +0000 UTC m=+122.516008694 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.515300 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-clhrp"]
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.515614 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn"
Jan 21 09:19:52 crc kubenswrapper[5113]: E0121 09:19:52.515934 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:53.015926755 +0000 UTC m=+122.516753804 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:52 crc kubenswrapper[5113]: W0121 09:19:52.524645 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7aca680f_7aa4_47e3_a52c_08e8e8a39c84.slice/crio-5682e3b85b7a7531724a0da43c00627e2c3c1e098e75ce78cb0acfc7ba10ca36 WatchSource:0}: Error finding container 5682e3b85b7a7531724a0da43c00627e2c3c1e098e75ce78cb0acfc7ba10ca36: Status 404 returned error can't find the container with id 5682e3b85b7a7531724a0da43c00627e2c3c1e098e75ce78cb0acfc7ba10ca36
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.612221 5113 generic.go:358] "Generic (PLEG): container finished" podID="dcdec0ee-3553-4c15-ad1f-eb6b29eec33a" containerID="fcc0dca8a59f603c746f368a24d8750249582b240afc7377bebaa3bdb1e96cbb" exitCode=0
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.612329 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483115-gpmgl" event={"ID":"dcdec0ee-3553-4c15-ad1f-eb6b29eec33a","Type":"ContainerDied","Data":"fcc0dca8a59f603c746f368a24d8750249582b240afc7377bebaa3bdb1e96cbb"}
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.616487 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:19:52 crc kubenswrapper[5113]: E0121 09:19:52.616671 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:53.116655462 +0000 UTC m=+122.617482501 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.616920 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn"
Jan 21 09:19:52 crc kubenswrapper[5113]: E0121 09:19:52.617260 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:53.117252908 +0000 UTC m=+122.618079957 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.617952 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlng9" event={"ID":"f864d6cd-a6bf-4de0-ad26-ed72ff0c0544","Type":"ContainerStarted","Data":"a8aa0cb155f1829bbeb05b6eebfa97a3bf420e5c337be2776d00d72c057b6385"}
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.619117 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-clhrp" event={"ID":"7aca680f-7aa4-47e3-a52c-08e8e8a39c84","Type":"ContainerStarted","Data":"5682e3b85b7a7531724a0da43c00627e2c3c1e098e75ce78cb0acfc7ba10ca36"}
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.625390 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-frj7n"]
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.672398 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7jsb2"]
Jan 21 09:19:52 crc kubenswrapper[5113]: W0121 09:19:52.696832 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e8b28cd_10b6_45d2_afe7_7d2fba1c98f0.slice/crio-820d280e84f379faa8f44c92fedb115155f5a4be64018d8156f985d203d3b2a1 WatchSource:0}: Error finding container 820d280e84f379faa8f44c92fedb115155f5a4be64018d8156f985d203d3b2a1: Status 404 returned error can't find the container with id 820d280e84f379faa8f44c92fedb115155f5a4be64018d8156f985d203d3b2a1
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.718706 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:19:52 crc kubenswrapper[5113]: E0121 09:19:52.718879 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:53.218851189 +0000 UTC m=+122.719678238 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.719381 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn"
Jan 21 09:19:52 crc kubenswrapper[5113]: E0121 09:19:52.719675 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:53.219668501 +0000 UTC m=+122.720495550 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.820128 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Jan 21 09:19:52 crc kubenswrapper[5113]: E0121 09:19:52.821237 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:53.32121974 +0000 UTC m=+122.822046789 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 09:19:52 crc kubenswrapper[5113]: I0121 09:19:52.922696 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn"
Jan 21 09:19:52 crc kubenswrapper[5113]: E0121 09:19:52.923091 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:53.423075207 +0000 UTC m=+122.923902256 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.023607 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:19:53 crc kubenswrapper[5113]: E0121 09:19:53.023806 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:53.523778863 +0000 UTC m=+123.024605912 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.024165 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:53 crc kubenswrapper[5113]: E0121 09:19:53.024441 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:53.524434771 +0000 UTC m=+123.025261820 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.125574 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:19:53 crc kubenswrapper[5113]: E0121 09:19:53.125768 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:53.625724573 +0000 UTC m=+123.126551622 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.125915 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:53 crc kubenswrapper[5113]: E0121 09:19:53.126428 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:53.626416111 +0000 UTC m=+123.127243160 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.220587 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-tbd7w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 09:19:53 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld Jan 21 09:19:53 crc kubenswrapper[5113]: [+]process-running ok Jan 21 09:19:53 crc kubenswrapper[5113]: healthz check failed Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.220638 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-tbd7w" podUID="6e718db5-bd36-400d-8121-5afc39eb6777" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.226682 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:19:53 crc kubenswrapper[5113]: E0121 09:19:53.226900 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-21 09:19:53.726871461 +0000 UTC m=+123.227698510 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.227335 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:53 crc kubenswrapper[5113]: E0121 09:19:53.227724 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:53.727715674 +0000 UTC m=+123.228542723 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.328041 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:19:53 crc kubenswrapper[5113]: E0121 09:19:53.328225 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:53.828197284 +0000 UTC m=+123.329024333 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.328655 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:53 crc kubenswrapper[5113]: E0121 09:19:53.329001 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:53.828988115 +0000 UTC m=+123.329815154 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.430116 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:19:53 crc kubenswrapper[5113]: E0121 09:19:53.430344 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:53.930312618 +0000 UTC m=+123.431139677 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.430419 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:53 crc kubenswrapper[5113]: E0121 09:19:53.430872 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:53.930859003 +0000 UTC m=+123.431686062 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.531173 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:19:53 crc kubenswrapper[5113]: E0121 09:19:53.531530 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:54.031508338 +0000 UTC m=+123.532335387 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.612791 5113 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.633400 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:53 crc kubenswrapper[5113]: E0121 09:19:53.633718 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:54.133699794 +0000 UTC m=+123.634526843 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.645551 5113 generic.go:358] "Generic (PLEG): container finished" podID="2097e4fe-30fc-4341-90b7-14877224a474" containerID="c45436242979b264329436e527a69d053d45072b5b97a79ab3c727d8fb9a9297" exitCode=0 Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.645601 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-frj7n" event={"ID":"2097e4fe-30fc-4341-90b7-14877224a474","Type":"ContainerDied","Data":"c45436242979b264329436e527a69d053d45072b5b97a79ab3c727d8fb9a9297"} Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.645650 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-frj7n" event={"ID":"2097e4fe-30fc-4341-90b7-14877224a474","Type":"ContainerStarted","Data":"31c56fcaef324debce60b00ef733ee7683c2eec53b3d125a9a2069e126cfbc9d"} Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.649966 5113 generic.go:358] "Generic (PLEG): container finished" podID="7aca680f-7aa4-47e3-a52c-08e8e8a39c84" containerID="3d6465db6b99abd8fe55ee944e6858e161373e78259ffd4ad993416cfd600cc4" exitCode=0 Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.650094 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-clhrp" event={"ID":"7aca680f-7aa4-47e3-a52c-08e8e8a39c84","Type":"ContainerDied","Data":"3d6465db6b99abd8fe55ee944e6858e161373e78259ffd4ad993416cfd600cc4"} Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.659300 
5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xdz8l" event={"ID":"78a63e50-2573-4ab1-bdc4-ff5a86a33f47","Type":"ContainerStarted","Data":"be590086bb863631cf15ced41c0d6950846da1e7171afb53a560e19a5cc94203"} Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.672624 5113 generic.go:358] "Generic (PLEG): container finished" podID="f864d6cd-a6bf-4de0-ad26-ed72ff0c0544" containerID="c3427ad5e58b17f0609fb09970af3806f510884844b707dc6288249767dfa772" exitCode=0 Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.672771 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlng9" event={"ID":"f864d6cd-a6bf-4de0-ad26-ed72ff0c0544","Type":"ContainerDied","Data":"c3427ad5e58b17f0609fb09970af3806f510884844b707dc6288249767dfa772"} Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.679037 5113 generic.go:358] "Generic (PLEG): container finished" podID="2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0" containerID="4d6e3b7591eed5fd8fb786d361cf5b89f6a43fd41ca60f124a283b749d26503b" exitCode=0 Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.679632 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7jsb2" event={"ID":"2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0","Type":"ContainerDied","Data":"4d6e3b7591eed5fd8fb786d361cf5b89f6a43fd41ca60f124a283b749d26503b"} Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.679676 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7jsb2" event={"ID":"2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0","Type":"ContainerStarted","Data":"820d280e84f379faa8f44c92fedb115155f5a4be64018d8156f985d203d3b2a1"} Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.726182 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b57t5"] Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.735776 5113 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.735861 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b57t5"] Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.735975 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b57t5" Jan 21 09:19:53 crc kubenswrapper[5113]: E0121 09:19:53.736657 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:54.23663902 +0000 UTC m=+123.737466069 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.741375 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:53 crc kubenswrapper[5113]: E0121 09:19:53.742419 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:54.242407265 +0000 UTC m=+123.743234314 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.744652 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.843196 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:19:53 crc kubenswrapper[5113]: E0121 09:19:53.843400 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:54.343379308 +0000 UTC m=+123.844206357 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.843502 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.843530 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f127893b-ca79-46cf-b50d-de1d623cdc3f-catalog-content\") pod \"redhat-marketplace-b57t5\" (UID: \"f127893b-ca79-46cf-b50d-de1d623cdc3f\") " pod="openshift-marketplace/redhat-marketplace-b57t5" Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.843601 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db25r\" (UniqueName: \"kubernetes.io/projected/f127893b-ca79-46cf-b50d-de1d623cdc3f-kube-api-access-db25r\") pod \"redhat-marketplace-b57t5\" (UID: \"f127893b-ca79-46cf-b50d-de1d623cdc3f\") " pod="openshift-marketplace/redhat-marketplace-b57t5" Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.843658 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/f127893b-ca79-46cf-b50d-de1d623cdc3f-utilities\") pod \"redhat-marketplace-b57t5\" (UID: \"f127893b-ca79-46cf-b50d-de1d623cdc3f\") " pod="openshift-marketplace/redhat-marketplace-b57t5" Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.843762 5113 scope.go:117] "RemoveContainer" containerID="383eb31f942f4a72a515ee030cd46d5e1130d7d74a8927d5daa09c8d744a67f6" Jan 21 09:19:53 crc kubenswrapper[5113]: E0121 09:19:53.844132 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-21 09:19:54.344114198 +0000 UTC m=+123.844941247 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-wt9pn" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.941914 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483115-gpmgl" Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.944837 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.945005 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f127893b-ca79-46cf-b50d-de1d623cdc3f-catalog-content\") pod \"redhat-marketplace-b57t5\" (UID: \"f127893b-ca79-46cf-b50d-de1d623cdc3f\") " pod="openshift-marketplace/redhat-marketplace-b57t5" Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.945054 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-db25r\" (UniqueName: \"kubernetes.io/projected/f127893b-ca79-46cf-b50d-de1d623cdc3f-kube-api-access-db25r\") pod \"redhat-marketplace-b57t5\" (UID: \"f127893b-ca79-46cf-b50d-de1d623cdc3f\") " pod="openshift-marketplace/redhat-marketplace-b57t5" Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.945090 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f127893b-ca79-46cf-b50d-de1d623cdc3f-utilities\") pod \"redhat-marketplace-b57t5\" (UID: \"f127893b-ca79-46cf-b50d-de1d623cdc3f\") " pod="openshift-marketplace/redhat-marketplace-b57t5" Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.945485 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f127893b-ca79-46cf-b50d-de1d623cdc3f-utilities\") pod \"redhat-marketplace-b57t5\" (UID: 
\"f127893b-ca79-46cf-b50d-de1d623cdc3f\") " pod="openshift-marketplace/redhat-marketplace-b57t5" Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.945546 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f127893b-ca79-46cf-b50d-de1d623cdc3f-catalog-content\") pod \"redhat-marketplace-b57t5\" (UID: \"f127893b-ca79-46cf-b50d-de1d623cdc3f\") " pod="openshift-marketplace/redhat-marketplace-b57t5" Jan 21 09:19:53 crc kubenswrapper[5113]: E0121 09:19:53.945604 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-21 09:19:54.445590705 +0000 UTC m=+123.946417754 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 09:19:53 crc kubenswrapper[5113]: I0121 09:19:53.970370 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-db25r\" (UniqueName: \"kubernetes.io/projected/f127893b-ca79-46cf-b50d-de1d623cdc3f-kube-api-access-db25r\") pod \"redhat-marketplace-b57t5\" (UID: \"f127893b-ca79-46cf-b50d-de1d623cdc3f\") " pod="openshift-marketplace/redhat-marketplace-b57t5" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.030298 5113 reconciler.go:161] "OperationExecutor.RegisterPlugin started" 
plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-21T09:19:53.613047671Z","UUID":"06efa7b3-da1f-4425-a1bf-16f4ff8da41d","Handler":null,"Name":"","Endpoint":""} Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.033176 5113 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.033202 5113 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.049052 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gz7k6\" (UniqueName: \"kubernetes.io/projected/dcdec0ee-3553-4c15-ad1f-eb6b29eec33a-kube-api-access-gz7k6\") pod \"dcdec0ee-3553-4c15-ad1f-eb6b29eec33a\" (UID: \"dcdec0ee-3553-4c15-ad1f-eb6b29eec33a\") " Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.049155 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dcdec0ee-3553-4c15-ad1f-eb6b29eec33a-config-volume\") pod \"dcdec0ee-3553-4c15-ad1f-eb6b29eec33a\" (UID: \"dcdec0ee-3553-4c15-ad1f-eb6b29eec33a\") " Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.049312 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dcdec0ee-3553-4c15-ad1f-eb6b29eec33a-secret-volume\") pod \"dcdec0ee-3553-4c15-ad1f-eb6b29eec33a\" (UID: \"dcdec0ee-3553-4c15-ad1f-eb6b29eec33a\") " Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.049511 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.051424 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcdec0ee-3553-4c15-ad1f-eb6b29eec33a-config-volume" (OuterVolumeSpecName: "config-volume") pod "dcdec0ee-3553-4c15-ad1f-eb6b29eec33a" (UID: "dcdec0ee-3553-4c15-ad1f-eb6b29eec33a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.054893 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcdec0ee-3553-4c15-ad1f-eb6b29eec33a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "dcdec0ee-3553-4c15-ad1f-eb6b29eec33a" (UID: "dcdec0ee-3553-4c15-ad1f-eb6b29eec33a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.054927 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcdec0ee-3553-4c15-ad1f-eb6b29eec33a-kube-api-access-gz7k6" (OuterVolumeSpecName: "kube-api-access-gz7k6") pod "dcdec0ee-3553-4c15-ad1f-eb6b29eec33a" (UID: "dcdec0ee-3553-4c15-ad1f-eb6b29eec33a"). InnerVolumeSpecName "kube-api-access-gz7k6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.056246 5113 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.056302 5113 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.092689 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-wt9pn\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.106419 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b57t5" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.113947 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4fxwd"] Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.114445 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dcdec0ee-3553-4c15-ad1f-eb6b29eec33a" containerName="collect-profiles" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.114462 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcdec0ee-3553-4c15-ad1f-eb6b29eec33a" containerName="collect-profiles" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.114562 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="dcdec0ee-3553-4c15-ad1f-eb6b29eec33a" containerName="collect-profiles" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.123463 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4fxwd"] Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.123590 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4fxwd" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.139229 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.147139 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.151309 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.151644 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.151974 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\"" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.152001 5113 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dcdec0ee-3553-4c15-ad1f-eb6b29eec33a-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.152048 5113 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dcdec0ee-3553-4c15-ad1f-eb6b29eec33a-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.152058 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gz7k6\" (UniqueName: \"kubernetes.io/projected/dcdec0ee-3553-4c15-ad1f-eb6b29eec33a-kube-api-access-gz7k6\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.152211 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\"" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.159792 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.220511 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-tbd7w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 09:19:54 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld Jan 21 09:19:54 crc kubenswrapper[5113]: [+]process-running ok Jan 21 09:19:54 crc kubenswrapper[5113]: healthz check failed Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.220754 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-tbd7w" podUID="6e718db5-bd36-400d-8121-5afc39eb6777" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.224200 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.233598 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.253289 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/408a7159-50c4-4253-9f85-7c5b87ebbbba-catalog-content\") pod \"redhat-marketplace-4fxwd\" (UID: \"408a7159-50c4-4253-9f85-7c5b87ebbbba\") " pod="openshift-marketplace/redhat-marketplace-4fxwd" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.253351 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/408a7159-50c4-4253-9f85-7c5b87ebbbba-utilities\") pod \"redhat-marketplace-4fxwd\" (UID: \"408a7159-50c4-4253-9f85-7c5b87ebbbba\") " pod="openshift-marketplace/redhat-marketplace-4fxwd" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.253370 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f1c2a85e-00e5-4766-a4fe-1e2fa2b2a852-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"f1c2a85e-00e5-4766-a4fe-1e2fa2b2a852\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.253399 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f1c2a85e-00e5-4766-a4fe-1e2fa2b2a852-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"f1c2a85e-00e5-4766-a4fe-1e2fa2b2a852\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.253429 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98mnj\" (UniqueName: 
\"kubernetes.io/projected/408a7159-50c4-4253-9f85-7c5b87ebbbba-kube-api-access-98mnj\") pod \"redhat-marketplace-4fxwd\" (UID: \"408a7159-50c4-4253-9f85-7c5b87ebbbba\") " pod="openshift-marketplace/redhat-marketplace-4fxwd" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.352179 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b57t5"] Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.354656 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/408a7159-50c4-4253-9f85-7c5b87ebbbba-catalog-content\") pod \"redhat-marketplace-4fxwd\" (UID: \"408a7159-50c4-4253-9f85-7c5b87ebbbba\") " pod="openshift-marketplace/redhat-marketplace-4fxwd" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.354755 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/408a7159-50c4-4253-9f85-7c5b87ebbbba-utilities\") pod \"redhat-marketplace-4fxwd\" (UID: \"408a7159-50c4-4253-9f85-7c5b87ebbbba\") " pod="openshift-marketplace/redhat-marketplace-4fxwd" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.354786 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f1c2a85e-00e5-4766-a4fe-1e2fa2b2a852-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"f1c2a85e-00e5-4766-a4fe-1e2fa2b2a852\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.354830 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f1c2a85e-00e5-4766-a4fe-1e2fa2b2a852-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"f1c2a85e-00e5-4766-a4fe-1e2fa2b2a852\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 
09:19:54.354883 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-98mnj\" (UniqueName: \"kubernetes.io/projected/408a7159-50c4-4253-9f85-7c5b87ebbbba-kube-api-access-98mnj\") pod \"redhat-marketplace-4fxwd\" (UID: \"408a7159-50c4-4253-9f85-7c5b87ebbbba\") " pod="openshift-marketplace/redhat-marketplace-4fxwd" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.355387 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f1c2a85e-00e5-4766-a4fe-1e2fa2b2a852-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"f1c2a85e-00e5-4766-a4fe-1e2fa2b2a852\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.355649 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/408a7159-50c4-4253-9f85-7c5b87ebbbba-utilities\") pod \"redhat-marketplace-4fxwd\" (UID: \"408a7159-50c4-4253-9f85-7c5b87ebbbba\") " pod="openshift-marketplace/redhat-marketplace-4fxwd" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.356013 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/408a7159-50c4-4253-9f85-7c5b87ebbbba-catalog-content\") pod \"redhat-marketplace-4fxwd\" (UID: \"408a7159-50c4-4253-9f85-7c5b87ebbbba\") " pod="openshift-marketplace/redhat-marketplace-4fxwd" Jan 21 09:19:54 crc kubenswrapper[5113]: W0121 09:19:54.369531 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf127893b_ca79_46cf_b50d_de1d623cdc3f.slice/crio-448d7bd75d83ba4f9f9c43312fd5e402605001378de59605d91dca149a0ffcc6 WatchSource:0}: Error finding container 448d7bd75d83ba4f9f9c43312fd5e402605001378de59605d91dca149a0ffcc6: Status 404 returned error can't find the container with id 
448d7bd75d83ba4f9f9c43312fd5e402605001378de59605d91dca149a0ffcc6 Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.386844 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f1c2a85e-00e5-4766-a4fe-1e2fa2b2a852-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"f1c2a85e-00e5-4766-a4fe-1e2fa2b2a852\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.392062 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-98mnj\" (UniqueName: \"kubernetes.io/projected/408a7159-50c4-4253-9f85-7c5b87ebbbba-kube-api-access-98mnj\") pod \"redhat-marketplace-4fxwd\" (UID: \"408a7159-50c4-4253-9f85-7c5b87ebbbba\") " pod="openshift-marketplace/redhat-marketplace-4fxwd" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.479872 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4fxwd" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.489933 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.542226 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-wt9pn"] Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.708497 5113 generic.go:358] "Generic (PLEG): container finished" podID="f127893b-ca79-46cf-b50d-de1d623cdc3f" containerID="4e216f63e60269daacf454363e9711f37bf62aaff82b102b08e726c060407aae" exitCode=0 Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.708935 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b57t5" event={"ID":"f127893b-ca79-46cf-b50d-de1d623cdc3f","Type":"ContainerDied","Data":"4e216f63e60269daacf454363e9711f37bf62aaff82b102b08e726c060407aae"} Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.708967 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b57t5" event={"ID":"f127893b-ca79-46cf-b50d-de1d623cdc3f","Type":"ContainerStarted","Data":"448d7bd75d83ba4f9f9c43312fd5e402605001378de59605d91dca149a0ffcc6"} Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.718930 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pkwr7"] Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.723177 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.731626 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"b1ce6cf47c3de0a268370a6bb606537c024693cefd424a31a593f0f3863d2f40"} Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.733896 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pkwr7" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.734401 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.734990 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483115-gpmgl" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.738588 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.738806 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483115-gpmgl" event={"ID":"dcdec0ee-3553-4c15-ad1f-eb6b29eec33a","Type":"ContainerDied","Data":"a58d38491fbab70e7368aae66a771c77b34dfbd75ab401e8d3a7dc9af0d3d0a1"} Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.738837 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a58d38491fbab70e7368aae66a771c77b34dfbd75ab401e8d3a7dc9af0d3d0a1" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.738853 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pkwr7"] Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.757710 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=27.757689299 podStartE2EDuration="27.757689299s" podCreationTimestamp="2026-01-21 09:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:54.755532101 +0000 UTC m=+124.256359160" watchObservedRunningTime="2026-01-21 09:19:54.757689299 +0000 UTC m=+124.258516348" 
Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.769108 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xdz8l" event={"ID":"78a63e50-2573-4ab1-bdc4-ff5a86a33f47","Type":"ContainerStarted","Data":"5c93b5bd2989385302bf0968cd124d6a4025d263bc904ae22a5738c9152e8e53"} Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.769146 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xdz8l" event={"ID":"78a63e50-2573-4ab1-bdc4-ff5a86a33f47","Type":"ContainerStarted","Data":"87a4a9e4d8fe64be8e9abfbb541604703abd3daec10478c023c06c3acbdfb697"} Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.770877 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" event={"ID":"fcd945f6-07b1-46f0-9c38-69d04075b569","Type":"ContainerStarted","Data":"b58ef55b2ae3331fc304092b986e44a194dedbc04a09873ddbb6ae37425b456a"} Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.815045 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-xdz8l" podStartSLOduration=12.815025094 podStartE2EDuration="12.815025094s" podCreationTimestamp="2026-01-21 09:19:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:54.810706069 +0000 UTC m=+124.311533118" watchObservedRunningTime="2026-01-21 09:19:54.815025094 +0000 UTC m=+124.315852143" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.824277 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.857541 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes" Jan 21 09:19:54 crc 
kubenswrapper[5113]: I0121 09:19:54.866673 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ad34d8c-f3f3-436e-8054-e0aa221aa622-utilities\") pod \"redhat-operators-pkwr7\" (UID: \"4ad34d8c-f3f3-436e-8054-e0aa221aa622\") " pod="openshift-marketplace/redhat-operators-pkwr7" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.866852 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djtzm\" (UniqueName: \"kubernetes.io/projected/4ad34d8c-f3f3-436e-8054-e0aa221aa622-kube-api-access-djtzm\") pod \"redhat-operators-pkwr7\" (UID: \"4ad34d8c-f3f3-436e-8054-e0aa221aa622\") " pod="openshift-marketplace/redhat-operators-pkwr7" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.867083 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ad34d8c-f3f3-436e-8054-e0aa221aa622-catalog-content\") pod \"redhat-operators-pkwr7\" (UID: \"4ad34d8c-f3f3-436e-8054-e0aa221aa622\") " pod="openshift-marketplace/redhat-operators-pkwr7" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.968633 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ad34d8c-f3f3-436e-8054-e0aa221aa622-catalog-content\") pod \"redhat-operators-pkwr7\" (UID: \"4ad34d8c-f3f3-436e-8054-e0aa221aa622\") " pod="openshift-marketplace/redhat-operators-pkwr7" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.968694 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ad34d8c-f3f3-436e-8054-e0aa221aa622-utilities\") pod \"redhat-operators-pkwr7\" (UID: \"4ad34d8c-f3f3-436e-8054-e0aa221aa622\") " pod="openshift-marketplace/redhat-operators-pkwr7" Jan 21 09:19:54 crc 
kubenswrapper[5113]: I0121 09:19:54.968727 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-djtzm\" (UniqueName: \"kubernetes.io/projected/4ad34d8c-f3f3-436e-8054-e0aa221aa622-kube-api-access-djtzm\") pod \"redhat-operators-pkwr7\" (UID: \"4ad34d8c-f3f3-436e-8054-e0aa221aa622\") " pod="openshift-marketplace/redhat-operators-pkwr7" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.969297 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ad34d8c-f3f3-436e-8054-e0aa221aa622-catalog-content\") pod \"redhat-operators-pkwr7\" (UID: \"4ad34d8c-f3f3-436e-8054-e0aa221aa622\") " pod="openshift-marketplace/redhat-operators-pkwr7" Jan 21 09:19:54 crc kubenswrapper[5113]: I0121 09:19:54.970184 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ad34d8c-f3f3-436e-8054-e0aa221aa622-utilities\") pod \"redhat-operators-pkwr7\" (UID: \"4ad34d8c-f3f3-436e-8054-e0aa221aa622\") " pod="openshift-marketplace/redhat-operators-pkwr7" Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.010071 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-djtzm\" (UniqueName: \"kubernetes.io/projected/4ad34d8c-f3f3-436e-8054-e0aa221aa622-kube-api-access-djtzm\") pod \"redhat-operators-pkwr7\" (UID: \"4ad34d8c-f3f3-436e-8054-e0aa221aa622\") " pod="openshift-marketplace/redhat-operators-pkwr7" Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.056290 5113 patch_prober.go:28] interesting pod/downloads-747b44746d-6cwdn container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.056352 5113 prober.go:120] "Probe failed" probeType="Liveness" 
pod="openshift-console/downloads-747b44746d-6cwdn" podUID="e4a07741-f83d-4dd4-b52c-1e55f3629eb1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.086093 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pkwr7" Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.130940 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6dthc"] Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.147675 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4fxwd"] Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.147711 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6dthc"] Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.147871 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6dthc" Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.195833 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-qgt8d" Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.196475 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-qgt8d" Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.204634 5113 patch_prober.go:28] interesting pod/console-64d44f6ddf-qgt8d container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.29:8443/health\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.204685 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-qgt8d" podUID="34bfafc0-b014-412d-8524-50aeb30d19ae" containerName="console" probeResult="failure" output="Get \"https://10.217.0.29:8443/health\": dial tcp 10.217.0.29:8443: connect: connection refused" Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.217625 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-tbd7w" Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.219988 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-tbd7w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 09:19:55 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld Jan 21 09:19:55 crc kubenswrapper[5113]: [+]process-running ok Jan 21 09:19:55 crc kubenswrapper[5113]: healthz check failed Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.220047 5113 prober.go:120] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-68cf44c8b8-tbd7w" podUID="6e718db5-bd36-400d-8121-5afc39eb6777" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.279491 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8jbf\" (UniqueName: \"kubernetes.io/projected/4296669d-6ddb-4410-877f-63072f824b28-kube-api-access-h8jbf\") pod \"redhat-operators-6dthc\" (UID: \"4296669d-6ddb-4410-877f-63072f824b28\") " pod="openshift-marketplace/redhat-operators-6dthc" Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.280469 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4296669d-6ddb-4410-877f-63072f824b28-utilities\") pod \"redhat-operators-6dthc\" (UID: \"4296669d-6ddb-4410-877f-63072f824b28\") " pod="openshift-marketplace/redhat-operators-6dthc" Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.280536 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4296669d-6ddb-4410-877f-63072f824b28-catalog-content\") pod \"redhat-operators-6dthc\" (UID: \"4296669d-6ddb-4410-877f-63072f824b28\") " pod="openshift-marketplace/redhat-operators-6dthc" Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.334469 5113 ???:1] "http: TLS handshake error from 192.168.126.11:37484: no serving certificate available for the kubelet" Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.382701 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4296669d-6ddb-4410-877f-63072f824b28-utilities\") pod \"redhat-operators-6dthc\" (UID: \"4296669d-6ddb-4410-877f-63072f824b28\") " pod="openshift-marketplace/redhat-operators-6dthc" Jan 21 09:19:55 
crc kubenswrapper[5113]: I0121 09:19:55.382779 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4296669d-6ddb-4410-877f-63072f824b28-catalog-content\") pod \"redhat-operators-6dthc\" (UID: \"4296669d-6ddb-4410-877f-63072f824b28\") " pod="openshift-marketplace/redhat-operators-6dthc" Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.382874 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h8jbf\" (UniqueName: \"kubernetes.io/projected/4296669d-6ddb-4410-877f-63072f824b28-kube-api-access-h8jbf\") pod \"redhat-operators-6dthc\" (UID: \"4296669d-6ddb-4410-877f-63072f824b28\") " pod="openshift-marketplace/redhat-operators-6dthc" Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.384624 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4296669d-6ddb-4410-877f-63072f824b28-utilities\") pod \"redhat-operators-6dthc\" (UID: \"4296669d-6ddb-4410-877f-63072f824b28\") " pod="openshift-marketplace/redhat-operators-6dthc" Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.385903 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4296669d-6ddb-4410-877f-63072f824b28-catalog-content\") pod \"redhat-operators-6dthc\" (UID: \"4296669d-6ddb-4410-877f-63072f824b28\") " pod="openshift-marketplace/redhat-operators-6dthc" Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.416784 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8jbf\" (UniqueName: \"kubernetes.io/projected/4296669d-6ddb-4410-877f-63072f824b28-kube-api-access-h8jbf\") pod \"redhat-operators-6dthc\" (UID: \"4296669d-6ddb-4410-877f-63072f824b28\") " pod="openshift-marketplace/redhat-operators-6dthc" Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.433583 5113 
kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pkwr7"] Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.458546 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-nrprx" Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.459866 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-nrprx" Jan 21 09:19:55 crc kubenswrapper[5113]: W0121 09:19:55.470917 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ad34d8c_f3f3_436e_8054_e0aa221aa622.slice/crio-b657a1eedd82f3cc867ffa6927daf62a55dc71f052793cb9d43313b06e70eedf WatchSource:0}: Error finding container b657a1eedd82f3cc867ffa6927daf62a55dc71f052793cb9d43313b06e70eedf: Status 404 returned error can't find the container with id b657a1eedd82f3cc867ffa6927daf62a55dc71f052793cb9d43313b06e70eedf Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.476932 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-nrprx" Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.523835 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6dthc" Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.773720 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6dthc"] Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.783950 5113 generic.go:358] "Generic (PLEG): container finished" podID="4ad34d8c-f3f3-436e-8054-e0aa221aa622" containerID="1d8c4e50c1861c0c3d9d7070cbdddcf67a352d3272390056df2d267ee1f856d0" exitCode=0 Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.784015 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkwr7" event={"ID":"4ad34d8c-f3f3-436e-8054-e0aa221aa622","Type":"ContainerDied","Data":"1d8c4e50c1861c0c3d9d7070cbdddcf67a352d3272390056df2d267ee1f856d0"} Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.784040 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkwr7" event={"ID":"4ad34d8c-f3f3-436e-8054-e0aa221aa622","Type":"ContainerStarted","Data":"b657a1eedd82f3cc867ffa6927daf62a55dc71f052793cb9d43313b06e70eedf"} Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.789644 5113 generic.go:358] "Generic (PLEG): container finished" podID="f1c2a85e-00e5-4766-a4fe-1e2fa2b2a852" containerID="369443e4c1f7a9803cfaba59b7b1fcfb698203bca4d059097932e7e3b2abc659" exitCode=0 Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.790032 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"f1c2a85e-00e5-4766-a4fe-1e2fa2b2a852","Type":"ContainerDied","Data":"369443e4c1f7a9803cfaba59b7b1fcfb698203bca4d059097932e7e3b2abc659"} Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.790061 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" 
event={"ID":"f1c2a85e-00e5-4766-a4fe-1e2fa2b2a852","Type":"ContainerStarted","Data":"df4a9d26b56c83e329f39775307f0f5691fc51d56743767d739ae8543eb92407"} Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.794700 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6dthc" event={"ID":"4296669d-6ddb-4410-877f-63072f824b28","Type":"ContainerStarted","Data":"793baadc34d7d403e49dc0575a9ce4d9e1f5607889c96f051c5815552eb1c09d"} Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.795900 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" event={"ID":"fcd945f6-07b1-46f0-9c38-69d04075b569","Type":"ContainerStarted","Data":"0152125a188ded39defe1f81c241c5738510ef521a6e9a9732404236d8b81def"} Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.796752 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.798763 5113 generic.go:358] "Generic (PLEG): container finished" podID="408a7159-50c4-4253-9f85-7c5b87ebbbba" containerID="e3feb453635b9800fccec423b7e0525174dfa794eb3da354469179a66252cf18" exitCode=0 Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.798978 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4fxwd" event={"ID":"408a7159-50c4-4253-9f85-7c5b87ebbbba","Type":"ContainerDied","Data":"e3feb453635b9800fccec423b7e0525174dfa794eb3da354469179a66252cf18"} Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.799022 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4fxwd" event={"ID":"408a7159-50c4-4253-9f85-7c5b87ebbbba","Type":"ContainerStarted","Data":"af55263fef06c59bdd2571125b138608c15a9f8352c8e79727fd16e6e826d0ed"} Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.806752 5113 kubelet.go:2658] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-nrprx" Jan 21 09:19:55 crc kubenswrapper[5113]: I0121 09:19:55.849381 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" podStartSLOduration=104.849364188 podStartE2EDuration="1m44.849364188s" podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:19:55.846588974 +0000 UTC m=+125.347416043" watchObservedRunningTime="2026-01-21 09:19:55.849364188 +0000 UTC m=+125.350191237" Jan 21 09:19:56 crc kubenswrapper[5113]: I0121 09:19:56.219952 5113 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-tbd7w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 09:19:56 crc kubenswrapper[5113]: [-]has-synced failed: reason withheld Jan 21 09:19:56 crc kubenswrapper[5113]: [+]process-running ok Jan 21 09:19:56 crc kubenswrapper[5113]: healthz check failed Jan 21 09:19:56 crc kubenswrapper[5113]: I0121 09:19:56.220007 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-tbd7w" podUID="6e718db5-bd36-400d-8121-5afc39eb6777" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 09:19:56 crc kubenswrapper[5113]: I0121 09:19:56.807091 5113 generic.go:358] "Generic (PLEG): container finished" podID="4296669d-6ddb-4410-877f-63072f824b28" containerID="4cbeed0d94f87b5a648959ed7a8075866e80a84b95b7b6a615c4fa2eb2618ee2" exitCode=0 Jan 21 09:19:56 crc kubenswrapper[5113]: I0121 09:19:56.807340 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6dthc" 
event={"ID":"4296669d-6ddb-4410-877f-63072f824b28","Type":"ContainerDied","Data":"4cbeed0d94f87b5a648959ed7a8075866e80a84b95b7b6a615c4fa2eb2618ee2"} Jan 21 09:19:57 crc kubenswrapper[5113]: I0121 09:19:57.017849 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 21 09:19:57 crc kubenswrapper[5113]: I0121 09:19:57.109297 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f1c2a85e-00e5-4766-a4fe-1e2fa2b2a852-kube-api-access\") pod \"f1c2a85e-00e5-4766-a4fe-1e2fa2b2a852\" (UID: \"f1c2a85e-00e5-4766-a4fe-1e2fa2b2a852\") " Jan 21 09:19:57 crc kubenswrapper[5113]: I0121 09:19:57.109468 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f1c2a85e-00e5-4766-a4fe-1e2fa2b2a852-kubelet-dir\") pod \"f1c2a85e-00e5-4766-a4fe-1e2fa2b2a852\" (UID: \"f1c2a85e-00e5-4766-a4fe-1e2fa2b2a852\") " Jan 21 09:19:57 crc kubenswrapper[5113]: I0121 09:19:57.109600 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1c2a85e-00e5-4766-a4fe-1e2fa2b2a852-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f1c2a85e-00e5-4766-a4fe-1e2fa2b2a852" (UID: "f1c2a85e-00e5-4766-a4fe-1e2fa2b2a852"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:19:57 crc kubenswrapper[5113]: I0121 09:19:57.110088 5113 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f1c2a85e-00e5-4766-a4fe-1e2fa2b2a852-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:57 crc kubenswrapper[5113]: I0121 09:19:57.114909 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1c2a85e-00e5-4766-a4fe-1e2fa2b2a852-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f1c2a85e-00e5-4766-a4fe-1e2fa2b2a852" (UID: "f1c2a85e-00e5-4766-a4fe-1e2fa2b2a852"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:19:57 crc kubenswrapper[5113]: I0121 09:19:57.211494 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f1c2a85e-00e5-4766-a4fe-1e2fa2b2a852-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 09:19:57 crc kubenswrapper[5113]: I0121 09:19:57.240269 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-tbd7w" Jan 21 09:19:57 crc kubenswrapper[5113]: I0121 09:19:57.250474 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-tbd7w" Jan 21 09:19:57 crc kubenswrapper[5113]: I0121 09:19:57.814015 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 21 09:19:57 crc kubenswrapper[5113]: I0121 09:19:57.814078 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"f1c2a85e-00e5-4766-a4fe-1e2fa2b2a852","Type":"ContainerDied","Data":"df4a9d26b56c83e329f39775307f0f5691fc51d56743767d739ae8543eb92407"} Jan 21 09:19:57 crc kubenswrapper[5113]: I0121 09:19:57.814114 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df4a9d26b56c83e329f39775307f0f5691fc51d56743767d739ae8543eb92407" Jan 21 09:19:57 crc kubenswrapper[5113]: I0121 09:19:57.874559 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 21 09:19:57 crc kubenswrapper[5113]: I0121 09:19:57.875401 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f1c2a85e-00e5-4766-a4fe-1e2fa2b2a852" containerName="pruner" Jan 21 09:19:57 crc kubenswrapper[5113]: I0121 09:19:57.875416 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1c2a85e-00e5-4766-a4fe-1e2fa2b2a852" containerName="pruner" Jan 21 09:19:57 crc kubenswrapper[5113]: I0121 09:19:57.875521 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="f1c2a85e-00e5-4766-a4fe-1e2fa2b2a852" containerName="pruner" Jan 21 09:19:57 crc kubenswrapper[5113]: I0121 09:19:57.880534 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 21 09:19:57 crc kubenswrapper[5113]: I0121 09:19:57.882937 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Jan 21 09:19:57 crc kubenswrapper[5113]: I0121 09:19:57.884133 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Jan 21 09:19:57 crc kubenswrapper[5113]: I0121 09:19:57.889324 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 21 09:19:58 crc kubenswrapper[5113]: I0121 09:19:58.022141 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd3cefe0-c2ae-4044-8dae-67c7bd1379ba-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"dd3cefe0-c2ae-4044-8dae-67c7bd1379ba\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 21 09:19:58 crc kubenswrapper[5113]: I0121 09:19:58.022338 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd3cefe0-c2ae-4044-8dae-67c7bd1379ba-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"dd3cefe0-c2ae-4044-8dae-67c7bd1379ba\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 21 09:19:58 crc kubenswrapper[5113]: I0121 09:19:58.123284 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd3cefe0-c2ae-4044-8dae-67c7bd1379ba-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"dd3cefe0-c2ae-4044-8dae-67c7bd1379ba\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 21 09:19:58 crc kubenswrapper[5113]: I0121 09:19:58.123412 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd3cefe0-c2ae-4044-8dae-67c7bd1379ba-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"dd3cefe0-c2ae-4044-8dae-67c7bd1379ba\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 21 09:19:58 crc kubenswrapper[5113]: I0121 09:19:58.123450 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd3cefe0-c2ae-4044-8dae-67c7bd1379ba-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"dd3cefe0-c2ae-4044-8dae-67c7bd1379ba\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 21 09:19:58 crc kubenswrapper[5113]: I0121 09:19:58.139117 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd3cefe0-c2ae-4044-8dae-67c7bd1379ba-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"dd3cefe0-c2ae-4044-8dae-67c7bd1379ba\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 21 09:19:58 crc kubenswrapper[5113]: I0121 09:19:58.205708 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 21 09:19:58 crc kubenswrapper[5113]: I0121 09:19:58.928594 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-q9t2j" Jan 21 09:19:59 crc kubenswrapper[5113]: E0121 09:19:59.458176 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fcba3beb0653e5b0d464f1c9cbe58bc8c963ce2802d8e6dbd834f40f6ae049b0" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 21 09:19:59 crc kubenswrapper[5113]: E0121 09:19:59.461805 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fcba3beb0653e5b0d464f1c9cbe58bc8c963ce2802d8e6dbd834f40f6ae049b0" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 21 09:19:59 crc kubenswrapper[5113]: E0121 09:19:59.467335 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fcba3beb0653e5b0d464f1c9cbe58bc8c963ce2802d8e6dbd834f40f6ae049b0" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 21 09:19:59 crc kubenswrapper[5113]: E0121 09:19:59.467393 5113 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-648hm" podUID="6a7eeb36-f834-4af3-8f38-f15bda8f1adb" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 21 09:19:59 crc kubenswrapper[5113]: I0121 09:19:59.647006 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 09:19:59 crc kubenswrapper[5113]: I0121 09:19:59.647117 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 09:19:59 crc kubenswrapper[5113]: I0121 09:19:59.647149 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 09:19:59 crc kubenswrapper[5113]: I0121 09:19:59.647199 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 09:19:59 crc kubenswrapper[5113]: I0121 09:19:59.649331 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Jan 21 09:19:59 crc kubenswrapper[5113]: I0121 09:19:59.649948 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Jan 21 09:19:59 crc kubenswrapper[5113]: I0121 09:19:59.650580 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Jan 21 09:19:59 crc kubenswrapper[5113]: I0121 09:19:59.659661 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Jan 21 09:19:59 crc kubenswrapper[5113]: I0121 09:19:59.664096 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 09:19:59 crc kubenswrapper[5113]: I0121 09:19:59.674325 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 09:19:59 crc kubenswrapper[5113]: I0121 09:19:59.678566 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 09:19:59 crc kubenswrapper[5113]: I0121 09:19:59.758596 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 09:19:59 crc kubenswrapper[5113]: I0121 09:19:59.769090 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 09:19:59 crc kubenswrapper[5113]: I0121 09:19:59.781450 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 21 09:19:59 crc kubenswrapper[5113]: I0121 09:19:59.787473 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 21 09:20:00 crc kubenswrapper[5113]: I0121 09:20:00.610840 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-6cwdn" Jan 21 09:20:00 crc kubenswrapper[5113]: I0121 09:20:00.863031 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d75af50-e19d-4048-b80e-51dae4c3378e-metrics-certs\") pod \"network-metrics-daemon-tcv7n\" (UID: \"0d75af50-e19d-4048-b80e-51dae4c3378e\") " pod="openshift-multus/network-metrics-daemon-tcv7n" Jan 21 09:20:00 crc kubenswrapper[5113]: I0121 09:20:00.867454 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0d75af50-e19d-4048-b80e-51dae4c3378e-metrics-certs\") pod \"network-metrics-daemon-tcv7n\" (UID: \"0d75af50-e19d-4048-b80e-51dae4c3378e\") " pod="openshift-multus/network-metrics-daemon-tcv7n" Jan 21 09:20:01 crc kubenswrapper[5113]: I0121 09:20:01.053016 5113 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Jan 21 09:20:01 crc kubenswrapper[5113]: I0121 09:20:01.060260 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-tcv7n" Jan 21 09:20:05 crc kubenswrapper[5113]: I0121 09:20:05.201510 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-qgt8d" Jan 21 09:20:05 crc kubenswrapper[5113]: I0121 09:20:05.211371 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-qgt8d" Jan 21 09:20:05 crc kubenswrapper[5113]: I0121 09:20:05.600480 5113 ???:1] "http: TLS handshake error from 192.168.126.11:34936: no serving certificate available for the kubelet" Jan 21 09:20:05 crc kubenswrapper[5113]: I0121 09:20:05.804838 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:20:06 crc kubenswrapper[5113]: I0121 09:20:06.878852 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-twjhp"] Jan 21 09:20:06 crc kubenswrapper[5113]: I0121 09:20:06.879877 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-twjhp" podUID="b8ec25fb-f982-43d5-90dd-2f369ea2fa7f" containerName="controller-manager" containerID="cri-o://5c7ee8834db6fe5e833de7fb434689b809fdfdb3a947d516c14cf77fcd5e9894" gracePeriod=30 Jan 21 09:20:06 crc kubenswrapper[5113]: I0121 09:20:06.907092 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-wcvvf"] Jan 21 09:20:06 crc kubenswrapper[5113]: I0121 09:20:06.907669 5113 kuberuntime_container.go:858] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wcvvf" podUID="579c5f31-2382-48db-8f59-60a7ed0827ed" containerName="route-controller-manager" containerID="cri-o://72a7a0b19cbcd0e91caaf5618ad80f1a2a08780dc145bf58a95949c3b2b95891" gracePeriod=30 Jan 21 09:20:07 crc kubenswrapper[5113]: I0121 09:20:07.648593 5113 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-wcvvf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 21 09:20:07 crc kubenswrapper[5113]: I0121 09:20:07.648664 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wcvvf" podUID="579c5f31-2382-48db-8f59-60a7ed0827ed" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 21 09:20:07 crc kubenswrapper[5113]: I0121 09:20:07.668173 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:20:08 crc kubenswrapper[5113]: I0121 09:20:08.885005 5113 generic.go:358] "Generic (PLEG): container finished" podID="b8ec25fb-f982-43d5-90dd-2f369ea2fa7f" containerID="5c7ee8834db6fe5e833de7fb434689b809fdfdb3a947d516c14cf77fcd5e9894" exitCode=0 Jan 21 09:20:08 crc kubenswrapper[5113]: I0121 09:20:08.885058 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-twjhp" event={"ID":"b8ec25fb-f982-43d5-90dd-2f369ea2fa7f","Type":"ContainerDied","Data":"5c7ee8834db6fe5e833de7fb434689b809fdfdb3a947d516c14cf77fcd5e9894"} Jan 21 09:20:09 crc kubenswrapper[5113]: E0121 09:20:09.453236 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = 
Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fcba3beb0653e5b0d464f1c9cbe58bc8c963ce2802d8e6dbd834f40f6ae049b0" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 21 09:20:09 crc kubenswrapper[5113]: E0121 09:20:09.455038 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fcba3beb0653e5b0d464f1c9cbe58bc8c963ce2802d8e6dbd834f40f6ae049b0" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 21 09:20:09 crc kubenswrapper[5113]: E0121 09:20:09.456764 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fcba3beb0653e5b0d464f1c9cbe58bc8c963ce2802d8e6dbd834f40f6ae049b0" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 21 09:20:09 crc kubenswrapper[5113]: E0121 09:20:09.456929 5113 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-648hm" podUID="6a7eeb36-f834-4af3-8f38-f15bda8f1adb" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 21 09:20:14 crc kubenswrapper[5113]: I0121 09:20:14.928098 5113 generic.go:358] "Generic (PLEG): container finished" podID="579c5f31-2382-48db-8f59-60a7ed0827ed" containerID="72a7a0b19cbcd0e91caaf5618ad80f1a2a08780dc145bf58a95949c3b2b95891" exitCode=0 Jan 21 09:20:14 crc kubenswrapper[5113]: I0121 09:20:14.928161 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wcvvf" 
event={"ID":"579c5f31-2382-48db-8f59-60a7ed0827ed","Type":"ContainerDied","Data":"72a7a0b19cbcd0e91caaf5618ad80f1a2a08780dc145bf58a95949c3b2b95891"} Jan 21 09:20:15 crc kubenswrapper[5113]: I0121 09:20:15.690780 5113 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-twjhp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 09:20:15 crc kubenswrapper[5113]: I0121 09:20:15.690857 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-twjhp" podUID="b8ec25fb-f982-43d5-90dd-2f369ea2fa7f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 09:20:17 crc kubenswrapper[5113]: I0121 09:20:17.820444 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:20:17 crc kubenswrapper[5113]: I0121 09:20:17.935428 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-twjhp" Jan 21 09:20:17 crc kubenswrapper[5113]: I0121 09:20:17.935819 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wcvvf" Jan 21 09:20:17 crc kubenswrapper[5113]: I0121 09:20:17.950963 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wcvvf" event={"ID":"579c5f31-2382-48db-8f59-60a7ed0827ed","Type":"ContainerDied","Data":"05f6c9eca8f7e96818eab8e413fc30407384ca9e54c5255444c801baf0d572f0"} Jan 21 09:20:17 crc kubenswrapper[5113]: I0121 09:20:17.951007 5113 scope.go:117] "RemoveContainer" containerID="72a7a0b19cbcd0e91caaf5618ad80f1a2a08780dc145bf58a95949c3b2b95891" Jan 21 09:20:17 crc kubenswrapper[5113]: I0121 09:20:17.951126 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wcvvf" Jan 21 09:20:17 crc kubenswrapper[5113]: I0121 09:20:17.970870 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-twjhp" event={"ID":"b8ec25fb-f982-43d5-90dd-2f369ea2fa7f","Type":"ContainerDied","Data":"3c5e3aaa1ceba4ea49363799472008fcc15faadd0a2301228e7c90227cb0d18b"} Jan 21 09:20:17 crc kubenswrapper[5113]: I0121 09:20:17.970984 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-twjhp" Jan 21 09:20:17 crc kubenswrapper[5113]: I0121 09:20:17.982060 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7b4659c5bd-2wllj"] Jan 21 09:20:17 crc kubenswrapper[5113]: I0121 09:20:17.984228 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="579c5f31-2382-48db-8f59-60a7ed0827ed" containerName="route-controller-manager" Jan 21 09:20:17 crc kubenswrapper[5113]: I0121 09:20:17.984331 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="579c5f31-2382-48db-8f59-60a7ed0827ed" containerName="route-controller-manager" Jan 21 09:20:17 crc kubenswrapper[5113]: I0121 09:20:17.984408 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b8ec25fb-f982-43d5-90dd-2f369ea2fa7f" containerName="controller-manager" Jan 21 09:20:17 crc kubenswrapper[5113]: I0121 09:20:17.984468 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8ec25fb-f982-43d5-90dd-2f369ea2fa7f" containerName="controller-manager" Jan 21 09:20:17 crc kubenswrapper[5113]: I0121 09:20:17.986114 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="579c5f31-2382-48db-8f59-60a7ed0827ed" containerName="route-controller-manager" Jan 21 09:20:17 crc kubenswrapper[5113]: I0121 09:20:17.986245 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="b8ec25fb-f982-43d5-90dd-2f369ea2fa7f" containerName="controller-manager" Jan 21 09:20:17 crc kubenswrapper[5113]: I0121 09:20:17.994905 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7b4659c5bd-2wllj" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.000869 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b4659c5bd-2wllj"] Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.012465 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c7874457b-9k5gd"] Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.025194 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c7874457b-9k5gd" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.032942 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c7874457b-9k5gd"] Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.049802 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/579c5f31-2382-48db-8f59-60a7ed0827ed-serving-cert\") pod \"579c5f31-2382-48db-8f59-60a7ed0827ed\" (UID: \"579c5f31-2382-48db-8f59-60a7ed0827ed\") " Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.049864 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfs4r\" (UniqueName: \"kubernetes.io/projected/b8ec25fb-f982-43d5-90dd-2f369ea2fa7f-kube-api-access-zfs4r\") pod \"b8ec25fb-f982-43d5-90dd-2f369ea2fa7f\" (UID: \"b8ec25fb-f982-43d5-90dd-2f369ea2fa7f\") " Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.049898 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/579c5f31-2382-48db-8f59-60a7ed0827ed-tmp\") pod \"579c5f31-2382-48db-8f59-60a7ed0827ed\" (UID: \"579c5f31-2382-48db-8f59-60a7ed0827ed\") " Jan 21 09:20:18 crc kubenswrapper[5113]: 
I0121 09:20:18.049985 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/579c5f31-2382-48db-8f59-60a7ed0827ed-client-ca\") pod \"579c5f31-2382-48db-8f59-60a7ed0827ed\" (UID: \"579c5f31-2382-48db-8f59-60a7ed0827ed\") " Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.050040 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8ec25fb-f982-43d5-90dd-2f369ea2fa7f-config\") pod \"b8ec25fb-f982-43d5-90dd-2f369ea2fa7f\" (UID: \"b8ec25fb-f982-43d5-90dd-2f369ea2fa7f\") " Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.050067 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/579c5f31-2382-48db-8f59-60a7ed0827ed-config\") pod \"579c5f31-2382-48db-8f59-60a7ed0827ed\" (UID: \"579c5f31-2382-48db-8f59-60a7ed0827ed\") " Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.050110 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b8ec25fb-f982-43d5-90dd-2f369ea2fa7f-proxy-ca-bundles\") pod \"b8ec25fb-f982-43d5-90dd-2f369ea2fa7f\" (UID: \"b8ec25fb-f982-43d5-90dd-2f369ea2fa7f\") " Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.050163 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8ec25fb-f982-43d5-90dd-2f369ea2fa7f-serving-cert\") pod \"b8ec25fb-f982-43d5-90dd-2f369ea2fa7f\" (UID: \"b8ec25fb-f982-43d5-90dd-2f369ea2fa7f\") " Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.050188 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b8ec25fb-f982-43d5-90dd-2f369ea2fa7f-tmp\") pod \"b8ec25fb-f982-43d5-90dd-2f369ea2fa7f\" (UID: 
\"b8ec25fb-f982-43d5-90dd-2f369ea2fa7f\") " Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.050237 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8ec25fb-f982-43d5-90dd-2f369ea2fa7f-client-ca\") pod \"b8ec25fb-f982-43d5-90dd-2f369ea2fa7f\" (UID: \"b8ec25fb-f982-43d5-90dd-2f369ea2fa7f\") " Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.050267 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gj9ph\" (UniqueName: \"kubernetes.io/projected/579c5f31-2382-48db-8f59-60a7ed0827ed-kube-api-access-gj9ph\") pod \"579c5f31-2382-48db-8f59-60a7ed0827ed\" (UID: \"579c5f31-2382-48db-8f59-60a7ed0827ed\") " Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.052821 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/579c5f31-2382-48db-8f59-60a7ed0827ed-config" (OuterVolumeSpecName: "config") pod "579c5f31-2382-48db-8f59-60a7ed0827ed" (UID: "579c5f31-2382-48db-8f59-60a7ed0827ed"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.053190 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8ec25fb-f982-43d5-90dd-2f369ea2fa7f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b8ec25fb-f982-43d5-90dd-2f369ea2fa7f" (UID: "b8ec25fb-f982-43d5-90dd-2f369ea2fa7f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.053980 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8ec25fb-f982-43d5-90dd-2f369ea2fa7f-tmp" (OuterVolumeSpecName: "tmp") pod "b8ec25fb-f982-43d5-90dd-2f369ea2fa7f" (UID: "b8ec25fb-f982-43d5-90dd-2f369ea2fa7f"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.054267 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/579c5f31-2382-48db-8f59-60a7ed0827ed-client-ca" (OuterVolumeSpecName: "client-ca") pod "579c5f31-2382-48db-8f59-60a7ed0827ed" (UID: "579c5f31-2382-48db-8f59-60a7ed0827ed"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.054404 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/579c5f31-2382-48db-8f59-60a7ed0827ed-tmp" (OuterVolumeSpecName: "tmp") pod "579c5f31-2382-48db-8f59-60a7ed0827ed" (UID: "579c5f31-2382-48db-8f59-60a7ed0827ed"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.054867 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8ec25fb-f982-43d5-90dd-2f369ea2fa7f-client-ca" (OuterVolumeSpecName: "client-ca") pod "b8ec25fb-f982-43d5-90dd-2f369ea2fa7f" (UID: "b8ec25fb-f982-43d5-90dd-2f369ea2fa7f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.055162 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8ec25fb-f982-43d5-90dd-2f369ea2fa7f-config" (OuterVolumeSpecName: "config") pod "b8ec25fb-f982-43d5-90dd-2f369ea2fa7f" (UID: "b8ec25fb-f982-43d5-90dd-2f369ea2fa7f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.064456 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/579c5f31-2382-48db-8f59-60a7ed0827ed-kube-api-access-gj9ph" (OuterVolumeSpecName: "kube-api-access-gj9ph") pod "579c5f31-2382-48db-8f59-60a7ed0827ed" (UID: "579c5f31-2382-48db-8f59-60a7ed0827ed"). InnerVolumeSpecName "kube-api-access-gj9ph". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.064667 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/579c5f31-2382-48db-8f59-60a7ed0827ed-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "579c5f31-2382-48db-8f59-60a7ed0827ed" (UID: "579c5f31-2382-48db-8f59-60a7ed0827ed"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.064927 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8ec25fb-f982-43d5-90dd-2f369ea2fa7f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b8ec25fb-f982-43d5-90dd-2f369ea2fa7f" (UID: "b8ec25fb-f982-43d5-90dd-2f369ea2fa7f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.071135 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8ec25fb-f982-43d5-90dd-2f369ea2fa7f-kube-api-access-zfs4r" (OuterVolumeSpecName: "kube-api-access-zfs4r") pod "b8ec25fb-f982-43d5-90dd-2f369ea2fa7f" (UID: "b8ec25fb-f982-43d5-90dd-2f369ea2fa7f"). InnerVolumeSpecName "kube-api-access-zfs4r". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.093448 5113 scope.go:117] "RemoveContainer" containerID="5c7ee8834db6fe5e833de7fb434689b809fdfdb3a947d516c14cf77fcd5e9894" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.151528 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3e8d5386-d038-46c5-8e21-fed111412a4b-proxy-ca-bundles\") pod \"controller-manager-7b4659c5bd-2wllj\" (UID: \"3e8d5386-d038-46c5-8e21-fed111412a4b\") " pod="openshift-controller-manager/controller-manager-7b4659c5bd-2wllj" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.151565 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d3e2d21-82e0-48ef-842f-ae785ccea3a9-config\") pod \"route-controller-manager-5c7874457b-9k5gd\" (UID: \"9d3e2d21-82e0-48ef-842f-ae785ccea3a9\") " pod="openshift-route-controller-manager/route-controller-manager-5c7874457b-9k5gd" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.151584 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d3e2d21-82e0-48ef-842f-ae785ccea3a9-serving-cert\") pod \"route-controller-manager-5c7874457b-9k5gd\" (UID: \"9d3e2d21-82e0-48ef-842f-ae785ccea3a9\") " pod="openshift-route-controller-manager/route-controller-manager-5c7874457b-9k5gd" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.151607 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d3e2d21-82e0-48ef-842f-ae785ccea3a9-client-ca\") pod \"route-controller-manager-5c7874457b-9k5gd\" (UID: \"9d3e2d21-82e0-48ef-842f-ae785ccea3a9\") " 
pod="openshift-route-controller-manager/route-controller-manager-5c7874457b-9k5gd" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.151779 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqxdg\" (UniqueName: \"kubernetes.io/projected/9d3e2d21-82e0-48ef-842f-ae785ccea3a9-kube-api-access-dqxdg\") pod \"route-controller-manager-5c7874457b-9k5gd\" (UID: \"9d3e2d21-82e0-48ef-842f-ae785ccea3a9\") " pod="openshift-route-controller-manager/route-controller-manager-5c7874457b-9k5gd" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.151802 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9d3e2d21-82e0-48ef-842f-ae785ccea3a9-tmp\") pod \"route-controller-manager-5c7874457b-9k5gd\" (UID: \"9d3e2d21-82e0-48ef-842f-ae785ccea3a9\") " pod="openshift-route-controller-manager/route-controller-manager-5c7874457b-9k5gd" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.151863 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3e8d5386-d038-46c5-8e21-fed111412a4b-tmp\") pod \"controller-manager-7b4659c5bd-2wllj\" (UID: \"3e8d5386-d038-46c5-8e21-fed111412a4b\") " pod="openshift-controller-manager/controller-manager-7b4659c5bd-2wllj" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.151954 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e8d5386-d038-46c5-8e21-fed111412a4b-client-ca\") pod \"controller-manager-7b4659c5bd-2wllj\" (UID: \"3e8d5386-d038-46c5-8e21-fed111412a4b\") " pod="openshift-controller-manager/controller-manager-7b4659c5bd-2wllj" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.152160 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e8d5386-d038-46c5-8e21-fed111412a4b-config\") pod \"controller-manager-7b4659c5bd-2wllj\" (UID: \"3e8d5386-d038-46c5-8e21-fed111412a4b\") " pod="openshift-controller-manager/controller-manager-7b4659c5bd-2wllj" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.152309 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mvh7\" (UniqueName: \"kubernetes.io/projected/3e8d5386-d038-46c5-8e21-fed111412a4b-kube-api-access-8mvh7\") pod \"controller-manager-7b4659c5bd-2wllj\" (UID: \"3e8d5386-d038-46c5-8e21-fed111412a4b\") " pod="openshift-controller-manager/controller-manager-7b4659c5bd-2wllj" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.152394 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e8d5386-d038-46c5-8e21-fed111412a4b-serving-cert\") pod \"controller-manager-7b4659c5bd-2wllj\" (UID: \"3e8d5386-d038-46c5-8e21-fed111412a4b\") " pod="openshift-controller-manager/controller-manager-7b4659c5bd-2wllj" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.152570 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8ec25fb-f982-43d5-90dd-2f369ea2fa7f-config\") on node \"crc\" DevicePath \"\"" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.152612 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/579c5f31-2382-48db-8f59-60a7ed0827ed-config\") on node \"crc\" DevicePath \"\"" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.152628 5113 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b8ec25fb-f982-43d5-90dd-2f369ea2fa7f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.152641 
5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8ec25fb-f982-43d5-90dd-2f369ea2fa7f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.152652 5113 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b8ec25fb-f982-43d5-90dd-2f369ea2fa7f-tmp\") on node \"crc\" DevicePath \"\"" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.152685 5113 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8ec25fb-f982-43d5-90dd-2f369ea2fa7f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.152698 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gj9ph\" (UniqueName: \"kubernetes.io/projected/579c5f31-2382-48db-8f59-60a7ed0827ed-kube-api-access-gj9ph\") on node \"crc\" DevicePath \"\"" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.152711 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/579c5f31-2382-48db-8f59-60a7ed0827ed-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.152722 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zfs4r\" (UniqueName: \"kubernetes.io/projected/b8ec25fb-f982-43d5-90dd-2f369ea2fa7f-kube-api-access-zfs4r\") on node \"crc\" DevicePath \"\"" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.152761 5113 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/579c5f31-2382-48db-8f59-60a7ed0827ed-tmp\") on node \"crc\" DevicePath \"\"" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.152776 5113 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/579c5f31-2382-48db-8f59-60a7ed0827ed-client-ca\") 
on node \"crc\" DevicePath \"\"" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.285533 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e8d5386-d038-46c5-8e21-fed111412a4b-config\") pod \"controller-manager-7b4659c5bd-2wllj\" (UID: \"3e8d5386-d038-46c5-8e21-fed111412a4b\") " pod="openshift-controller-manager/controller-manager-7b4659c5bd-2wllj" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.285583 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8mvh7\" (UniqueName: \"kubernetes.io/projected/3e8d5386-d038-46c5-8e21-fed111412a4b-kube-api-access-8mvh7\") pod \"controller-manager-7b4659c5bd-2wllj\" (UID: \"3e8d5386-d038-46c5-8e21-fed111412a4b\") " pod="openshift-controller-manager/controller-manager-7b4659c5bd-2wllj" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.285608 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e8d5386-d038-46c5-8e21-fed111412a4b-serving-cert\") pod \"controller-manager-7b4659c5bd-2wllj\" (UID: \"3e8d5386-d038-46c5-8e21-fed111412a4b\") " pod="openshift-controller-manager/controller-manager-7b4659c5bd-2wllj" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.285661 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3e8d5386-d038-46c5-8e21-fed111412a4b-proxy-ca-bundles\") pod \"controller-manager-7b4659c5bd-2wllj\" (UID: \"3e8d5386-d038-46c5-8e21-fed111412a4b\") " pod="openshift-controller-manager/controller-manager-7b4659c5bd-2wllj" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.285697 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d3e2d21-82e0-48ef-842f-ae785ccea3a9-config\") pod 
\"route-controller-manager-5c7874457b-9k5gd\" (UID: \"9d3e2d21-82e0-48ef-842f-ae785ccea3a9\") " pod="openshift-route-controller-manager/route-controller-manager-5c7874457b-9k5gd" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.285719 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d3e2d21-82e0-48ef-842f-ae785ccea3a9-serving-cert\") pod \"route-controller-manager-5c7874457b-9k5gd\" (UID: \"9d3e2d21-82e0-48ef-842f-ae785ccea3a9\") " pod="openshift-route-controller-manager/route-controller-manager-5c7874457b-9k5gd" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.285768 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d3e2d21-82e0-48ef-842f-ae785ccea3a9-client-ca\") pod \"route-controller-manager-5c7874457b-9k5gd\" (UID: \"9d3e2d21-82e0-48ef-842f-ae785ccea3a9\") " pod="openshift-route-controller-manager/route-controller-manager-5c7874457b-9k5gd" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.285800 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dqxdg\" (UniqueName: \"kubernetes.io/projected/9d3e2d21-82e0-48ef-842f-ae785ccea3a9-kube-api-access-dqxdg\") pod \"route-controller-manager-5c7874457b-9k5gd\" (UID: \"9d3e2d21-82e0-48ef-842f-ae785ccea3a9\") " pod="openshift-route-controller-manager/route-controller-manager-5c7874457b-9k5gd" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.285816 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9d3e2d21-82e0-48ef-842f-ae785ccea3a9-tmp\") pod \"route-controller-manager-5c7874457b-9k5gd\" (UID: \"9d3e2d21-82e0-48ef-842f-ae785ccea3a9\") " pod="openshift-route-controller-manager/route-controller-manager-5c7874457b-9k5gd" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.285880 5113 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3e8d5386-d038-46c5-8e21-fed111412a4b-tmp\") pod \"controller-manager-7b4659c5bd-2wllj\" (UID: \"3e8d5386-d038-46c5-8e21-fed111412a4b\") " pod="openshift-controller-manager/controller-manager-7b4659c5bd-2wllj" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.285903 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e8d5386-d038-46c5-8e21-fed111412a4b-client-ca\") pod \"controller-manager-7b4659c5bd-2wllj\" (UID: \"3e8d5386-d038-46c5-8e21-fed111412a4b\") " pod="openshift-controller-manager/controller-manager-7b4659c5bd-2wllj" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.286649 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e8d5386-d038-46c5-8e21-fed111412a4b-client-ca\") pod \"controller-manager-7b4659c5bd-2wllj\" (UID: \"3e8d5386-d038-46c5-8e21-fed111412a4b\") " pod="openshift-controller-manager/controller-manager-7b4659c5bd-2wllj" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.287815 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9d3e2d21-82e0-48ef-842f-ae785ccea3a9-tmp\") pod \"route-controller-manager-5c7874457b-9k5gd\" (UID: \"9d3e2d21-82e0-48ef-842f-ae785ccea3a9\") " pod="openshift-route-controller-manager/route-controller-manager-5c7874457b-9k5gd" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.288109 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d3e2d21-82e0-48ef-842f-ae785ccea3a9-client-ca\") pod \"route-controller-manager-5c7874457b-9k5gd\" (UID: \"9d3e2d21-82e0-48ef-842f-ae785ccea3a9\") " pod="openshift-route-controller-manager/route-controller-manager-5c7874457b-9k5gd" Jan 21 09:20:18 crc 
kubenswrapper[5113]: I0121 09:20:18.289102 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3e8d5386-d038-46c5-8e21-fed111412a4b-tmp\") pod \"controller-manager-7b4659c5bd-2wllj\" (UID: \"3e8d5386-d038-46c5-8e21-fed111412a4b\") " pod="openshift-controller-manager/controller-manager-7b4659c5bd-2wllj" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.289556 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3e8d5386-d038-46c5-8e21-fed111412a4b-proxy-ca-bundles\") pod \"controller-manager-7b4659c5bd-2wllj\" (UID: \"3e8d5386-d038-46c5-8e21-fed111412a4b\") " pod="openshift-controller-manager/controller-manager-7b4659c5bd-2wllj" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.290139 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d3e2d21-82e0-48ef-842f-ae785ccea3a9-config\") pod \"route-controller-manager-5c7874457b-9k5gd\" (UID: \"9d3e2d21-82e0-48ef-842f-ae785ccea3a9\") " pod="openshift-route-controller-manager/route-controller-manager-5c7874457b-9k5gd" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.292362 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e8d5386-d038-46c5-8e21-fed111412a4b-config\") pod \"controller-manager-7b4659c5bd-2wllj\" (UID: \"3e8d5386-d038-46c5-8e21-fed111412a4b\") " pod="openshift-controller-manager/controller-manager-7b4659c5bd-2wllj" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.294144 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e8d5386-d038-46c5-8e21-fed111412a4b-serving-cert\") pod \"controller-manager-7b4659c5bd-2wllj\" (UID: \"3e8d5386-d038-46c5-8e21-fed111412a4b\") " 
pod="openshift-controller-manager/controller-manager-7b4659c5bd-2wllj" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.294247 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d3e2d21-82e0-48ef-842f-ae785ccea3a9-serving-cert\") pod \"route-controller-manager-5c7874457b-9k5gd\" (UID: \"9d3e2d21-82e0-48ef-842f-ae785ccea3a9\") " pod="openshift-route-controller-manager/route-controller-manager-5c7874457b-9k5gd" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.303665 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqxdg\" (UniqueName: \"kubernetes.io/projected/9d3e2d21-82e0-48ef-842f-ae785ccea3a9-kube-api-access-dqxdg\") pod \"route-controller-manager-5c7874457b-9k5gd\" (UID: \"9d3e2d21-82e0-48ef-842f-ae785ccea3a9\") " pod="openshift-route-controller-manager/route-controller-manager-5c7874457b-9k5gd" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.320448 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mvh7\" (UniqueName: \"kubernetes.io/projected/3e8d5386-d038-46c5-8e21-fed111412a4b-kube-api-access-8mvh7\") pod \"controller-manager-7b4659c5bd-2wllj\" (UID: \"3e8d5386-d038-46c5-8e21-fed111412a4b\") " pod="openshift-controller-manager/controller-manager-7b4659c5bd-2wllj" Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.329295 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-wcvvf"] Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.329354 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-wcvvf"] Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.330411 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7b4659c5bd-2wllj"
Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.347199 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-twjhp"]
Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.347260 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-twjhp"]
Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.358586 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c7874457b-9k5gd"
Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.461573 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"]
Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.466876 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-tcv7n"]
Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.651845 5113 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-wcvvf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.652213 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-wcvvf" podUID="579c5f31-2382-48db-8f59-60a7ed0827ed" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.770074 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c7874457b-9k5gd"]
Jan 21 09:20:18 crc kubenswrapper[5113]: W0121 09:20:18.814720 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d3e2d21_82e0_48ef_842f_ae785ccea3a9.slice/crio-18e8211d91bfa44c3ed9bff0d0ce6cd8cfba887dde45c0a3925eb9cfbeb60c5d WatchSource:0}: Error finding container 18e8211d91bfa44c3ed9bff0d0ce6cd8cfba887dde45c0a3925eb9cfbeb60c5d: Status 404 returned error can't find the container with id 18e8211d91bfa44c3ed9bff0d0ce6cd8cfba887dde45c0a3925eb9cfbeb60c5d
Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.853402 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="579c5f31-2382-48db-8f59-60a7ed0827ed" path="/var/lib/kubelet/pods/579c5f31-2382-48db-8f59-60a7ed0827ed/volumes"
Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.854146 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8ec25fb-f982-43d5-90dd-2f369ea2fa7f" path="/var/lib/kubelet/pods/b8ec25fb-f982-43d5-90dd-2f369ea2fa7f/volumes"
Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.922382 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b4659c5bd-2wllj"]
Jan 21 09:20:18 crc kubenswrapper[5113]: W0121 09:20:18.938402 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e8d5386_d038_46c5_8e21_fed111412a4b.slice/crio-ff64fbab25ef4aa2eafdb76a4807b09a625f8e202a42948d52d1f61be5c502d7 WatchSource:0}: Error finding container ff64fbab25ef4aa2eafdb76a4807b09a625f8e202a42948d52d1f61be5c502d7: Status 404 returned error can't find the container with id ff64fbab25ef4aa2eafdb76a4807b09a625f8e202a42948d52d1f61be5c502d7
Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.984283 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6dthc" event={"ID":"4296669d-6ddb-4410-877f-63072f824b28","Type":"ContainerStarted","Data":"54bbcdcfc69342fbf5c71b7e9f181c1b009dc0e600b063dde1fed74fa64b4bec"}
Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.987131 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c7874457b-9k5gd" event={"ID":"9d3e2d21-82e0-48ef-842f-ae785ccea3a9","Type":"ContainerStarted","Data":"18e8211d91bfa44c3ed9bff0d0ce6cd8cfba887dde45c0a3925eb9cfbeb60c5d"}
Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.990482 5113 generic.go:358] "Generic (PLEG): container finished" podID="2097e4fe-30fc-4341-90b7-14877224a474" containerID="1d5a01f414be232877a258ab4dfa74cd9b9537aee36fe9357ad6dec1121afe04" exitCode=0
Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.990538 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-frj7n" event={"ID":"2097e4fe-30fc-4341-90b7-14877224a474","Type":"ContainerDied","Data":"1d5a01f414be232877a258ab4dfa74cd9b9537aee36fe9357ad6dec1121afe04"}
Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.995258 5113 generic.go:358] "Generic (PLEG): container finished" podID="f127893b-ca79-46cf-b50d-de1d623cdc3f" containerID="180a45a5e6ed638264837aa8986d9cfff2c660f20ecd30446dd49b0aeef36b41" exitCode=0
Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.995505 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b57t5" event={"ID":"f127893b-ca79-46cf-b50d-de1d623cdc3f","Type":"ContainerDied","Data":"180a45a5e6ed638264837aa8986d9cfff2c660f20ecd30446dd49b0aeef36b41"}
Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.997278 5113 generic.go:358] "Generic (PLEG): container finished" podID="408a7159-50c4-4253-9f85-7c5b87ebbbba" containerID="8c12bb3929383843bddb1d695794aea7c376913499116b2f02878aa17d8d8124" exitCode=0
Jan 21 09:20:18 crc kubenswrapper[5113]: I0121 09:20:18.997490 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4fxwd" event={"ID":"408a7159-50c4-4253-9f85-7c5b87ebbbba","Type":"ContainerDied","Data":"8c12bb3929383843bddb1d695794aea7c376913499116b2f02878aa17d8d8124"}
Jan 21 09:20:19 crc kubenswrapper[5113]: I0121 09:20:19.005698 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"7f77a8570c42eeaac5129adedb2d34688c9138b03bab3ca10304a940a2f9cc1b"}
Jan 21 09:20:19 crc kubenswrapper[5113]: I0121 09:20:19.005746 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"f064c106db29707cd69f7001afe3f976ce9f36c59f31db17d258557383796aa9"}
Jan 21 09:20:19 crc kubenswrapper[5113]: I0121 09:20:19.009514 5113 generic.go:358] "Generic (PLEG): container finished" podID="7aca680f-7aa4-47e3-a52c-08e8e8a39c84" containerID="76968e347c0612c47b0c784fb1fe3dc846511d7103e1c1c962f901d56bd7524c" exitCode=0
Jan 21 09:20:19 crc kubenswrapper[5113]: I0121 09:20:19.009567 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-clhrp" event={"ID":"7aca680f-7aa4-47e3-a52c-08e8e8a39c84","Type":"ContainerDied","Data":"76968e347c0612c47b0c784fb1fe3dc846511d7103e1c1c962f901d56bd7524c"}
Jan 21 09:20:19 crc kubenswrapper[5113]: I0121 09:20:19.012685 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"dd3cefe0-c2ae-4044-8dae-67c7bd1379ba","Type":"ContainerStarted","Data":"93d54b623beae73e4a100bea78ea33ce373ba35d00da94682b9878e53b2ea334"}
Jan 21 09:20:19 crc kubenswrapper[5113]: I0121 09:20:19.017408 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"f33da92e4146b1618415910f7085e23294520a23264fe56daa75b9e931b6f0f7"}
Jan 21 09:20:19 crc kubenswrapper[5113]: I0121 09:20:19.017443 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"759c23fb722720b40335c8d9d0294b45daa4b73484245cb25c4da60ee8defea3"}
Jan 21 09:20:19 crc kubenswrapper[5113]: I0121 09:20:19.022283 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tcv7n" event={"ID":"0d75af50-e19d-4048-b80e-51dae4c3378e","Type":"ContainerStarted","Data":"c508ee00be829130de5ff61f1732f0cd3b751926af41afb9afa02c66c06b2fb2"}
Jan 21 09:20:19 crc kubenswrapper[5113]: I0121 09:20:19.034953 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"b33b1edf63028af556cc46f0a9efc04646a5a4b6e24bb846d46e558bd3862e94"}
Jan 21 09:20:19 crc kubenswrapper[5113]: I0121 09:20:19.035009 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"97c792094fc93b4de7d969a9b3474cc23d7448f0cdd0a3fff2eb14fe88923884"}
Jan 21 09:20:19 crc kubenswrapper[5113]: I0121 09:20:19.035617 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Jan 21 09:20:19 crc kubenswrapper[5113]: I0121 09:20:19.036108 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b4659c5bd-2wllj" event={"ID":"3e8d5386-d038-46c5-8e21-fed111412a4b","Type":"ContainerStarted","Data":"ff64fbab25ef4aa2eafdb76a4807b09a625f8e202a42948d52d1f61be5c502d7"}
Jan 21 09:20:19 crc kubenswrapper[5113]: I0121 09:20:19.080911 5113 generic.go:358] "Generic (PLEG): container finished" podID="f864d6cd-a6bf-4de0-ad26-ed72ff0c0544" containerID="9cefbe9377e0f7238f8953e10255c030ba6b72124d88fcc772d330a545234a0e" exitCode=0
Jan 21 09:20:19 crc kubenswrapper[5113]: I0121 09:20:19.081011 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlng9" event={"ID":"f864d6cd-a6bf-4de0-ad26-ed72ff0c0544","Type":"ContainerDied","Data":"9cefbe9377e0f7238f8953e10255c030ba6b72124d88fcc772d330a545234a0e"}
Jan 21 09:20:19 crc kubenswrapper[5113]: I0121 09:20:19.100443 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkwr7" event={"ID":"4ad34d8c-f3f3-436e-8054-e0aa221aa622","Type":"ContainerStarted","Data":"90e423a59f999aa4fd26273be1f19de3ea89e46cf64b57c7fc0ebce14bce913e"}
Jan 21 09:20:19 crc kubenswrapper[5113]: I0121 09:20:19.111930 5113 generic.go:358] "Generic (PLEG): container finished" podID="2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0" containerID="662080317cfd5da8db73d3b25408b2e712be9e09862c1af5a7dfe40951af6f3b" exitCode=0
Jan 21 09:20:19 crc kubenswrapper[5113]: I0121 09:20:19.111994 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7jsb2" event={"ID":"2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0","Type":"ContainerDied","Data":"662080317cfd5da8db73d3b25408b2e712be9e09862c1af5a7dfe40951af6f3b"}
Jan 21 09:20:19 crc kubenswrapper[5113]: E0121 09:20:19.453610 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fcba3beb0653e5b0d464f1c9cbe58bc8c963ce2802d8e6dbd834f40f6ae049b0" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 21 09:20:19 crc kubenswrapper[5113]: E0121 09:20:19.455847 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fcba3beb0653e5b0d464f1c9cbe58bc8c963ce2802d8e6dbd834f40f6ae049b0" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 21 09:20:19 crc kubenswrapper[5113]: E0121 09:20:19.458388 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fcba3beb0653e5b0d464f1c9cbe58bc8c963ce2802d8e6dbd834f40f6ae049b0" cmd=["/bin/bash","-c","test -f /ready/ready"]
Jan 21 09:20:19 crc kubenswrapper[5113]: E0121 09:20:19.458430 5113 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-648hm" podUID="6a7eeb36-f834-4af3-8f38-f15bda8f1adb" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Jan 21 09:20:20 crc kubenswrapper[5113]: I0121 09:20:20.135864 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-frj7n" event={"ID":"2097e4fe-30fc-4341-90b7-14877224a474","Type":"ContainerStarted","Data":"a1b545b4ef756a2a4e15460361651faf9ce3d1c59328292f061b590256135999"}
Jan 21 09:20:20 crc kubenswrapper[5113]: I0121 09:20:20.143546 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b57t5" event={"ID":"f127893b-ca79-46cf-b50d-de1d623cdc3f","Type":"ContainerStarted","Data":"e076047b8ae1d88074199f9aec38b44fa0d755743baceed407122b4743f17e31"}
Jan 21 09:20:20 crc kubenswrapper[5113]: I0121 09:20:20.147434 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4fxwd" event={"ID":"408a7159-50c4-4253-9f85-7c5b87ebbbba","Type":"ContainerStarted","Data":"e07971153d29503a4e6fe23c814d6082a2cfdb3f2993b974bc8be5530b6571e9"}
Jan 21 09:20:20 crc kubenswrapper[5113]: I0121 09:20:20.153989 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-clhrp" event={"ID":"7aca680f-7aa4-47e3-a52c-08e8e8a39c84","Type":"ContainerStarted","Data":"fac554e87e08f441d6c80e8590a56a5d0e1b1ca471ffea784e9a0b18c3577926"}
Jan 21 09:20:20 crc kubenswrapper[5113]: I0121 09:20:20.156255 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-frj7n" podStartSLOduration=4.936463682 podStartE2EDuration="29.156238027s" podCreationTimestamp="2026-01-21 09:19:51 +0000 UTC" firstStartedPulling="2026-01-21 09:19:53.646399914 +0000 UTC m=+123.147226963" lastFinishedPulling="2026-01-21 09:20:17.866174219 +0000 UTC m=+147.367001308" observedRunningTime="2026-01-21 09:20:20.153651686 +0000 UTC m=+149.654478735" watchObservedRunningTime="2026-01-21 09:20:20.156238027 +0000 UTC m=+149.657065086"
Jan 21 09:20:20 crc kubenswrapper[5113]: I0121 09:20:20.156406 5113 generic.go:358] "Generic (PLEG): container finished" podID="dd3cefe0-c2ae-4044-8dae-67c7bd1379ba" containerID="162d54f35e49e124ba6a1b94d096ad70a244f6a639e450e4a8dc8ed5f72930f2" exitCode=0
Jan 21 09:20:20 crc kubenswrapper[5113]: I0121 09:20:20.156498 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"dd3cefe0-c2ae-4044-8dae-67c7bd1379ba","Type":"ContainerDied","Data":"162d54f35e49e124ba6a1b94d096ad70a244f6a639e450e4a8dc8ed5f72930f2"}
Jan 21 09:20:20 crc kubenswrapper[5113]: I0121 09:20:20.158400 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tcv7n" event={"ID":"0d75af50-e19d-4048-b80e-51dae4c3378e","Type":"ContainerStarted","Data":"7ec2fd30a7abd380bf7fba1ffb90029357467482281bf6db2381f3f386c55e70"}
Jan 21 09:20:20 crc kubenswrapper[5113]: I0121 09:20:20.158424 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-tcv7n" event={"ID":"0d75af50-e19d-4048-b80e-51dae4c3378e","Type":"ContainerStarted","Data":"d01bf8958d561d472b2fb7106bb64a0bf81a1991c8c30bef12a071567c3c7d5e"}
Jan 21 09:20:20 crc kubenswrapper[5113]: I0121 09:20:20.159995 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b4659c5bd-2wllj" event={"ID":"3e8d5386-d038-46c5-8e21-fed111412a4b","Type":"ContainerStarted","Data":"f3422c39fb038549f1142082ec0e669c28fdc1c392f2d3a56cb194fce22b0964"}
Jan 21 09:20:20 crc kubenswrapper[5113]: I0121 09:20:20.160605 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-7b4659c5bd-2wllj"
Jan 21 09:20:20 crc kubenswrapper[5113]: I0121 09:20:20.162481 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlng9" event={"ID":"f864d6cd-a6bf-4de0-ad26-ed72ff0c0544","Type":"ContainerStarted","Data":"428ba9dd346a511bf265107b1cfdbcb274364984449155e2e7682b2a21be7644"}
Jan 21 09:20:20 crc kubenswrapper[5113]: I0121 09:20:20.165565 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7b4659c5bd-2wllj"
Jan 21 09:20:20 crc kubenswrapper[5113]: I0121 09:20:20.165942 5113 generic.go:358] "Generic (PLEG): container finished" podID="4ad34d8c-f3f3-436e-8054-e0aa221aa622" containerID="90e423a59f999aa4fd26273be1f19de3ea89e46cf64b57c7fc0ebce14bce913e" exitCode=0
Jan 21 09:20:20 crc kubenswrapper[5113]: I0121 09:20:20.166009 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkwr7" event={"ID":"4ad34d8c-f3f3-436e-8054-e0aa221aa622","Type":"ContainerDied","Data":"90e423a59f999aa4fd26273be1f19de3ea89e46cf64b57c7fc0ebce14bce913e"}
Jan 21 09:20:20 crc kubenswrapper[5113]: I0121 09:20:20.166023 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkwr7" event={"ID":"4ad34d8c-f3f3-436e-8054-e0aa221aa622","Type":"ContainerStarted","Data":"4810fa84e93f3966973bb37e473e42788a16e3165715e1bac7a758696134cd74"}
Jan 21 09:20:20 crc kubenswrapper[5113]: I0121 09:20:20.171727 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b57t5" podStartSLOduration=3.8523514629999998 podStartE2EDuration="27.171717443s" podCreationTimestamp="2026-01-21 09:19:53 +0000 UTC" firstStartedPulling="2026-01-21 09:19:54.709790897 +0000 UTC m=+124.210617946" lastFinishedPulling="2026-01-21 09:20:18.029156877 +0000 UTC m=+147.529983926" observedRunningTime="2026-01-21 09:20:20.171204319 +0000 UTC m=+149.672031368" watchObservedRunningTime="2026-01-21 09:20:20.171717443 +0000 UTC m=+149.672544492"
Jan 21 09:20:20 crc kubenswrapper[5113]: I0121 09:20:20.172330 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7jsb2" event={"ID":"2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0","Type":"ContainerStarted","Data":"ecd200f0636ea3664f0eed2ca99711ea1e0df086dd3dc7234b9e778e007fe7c0"}
Jan 21 09:20:20 crc kubenswrapper[5113]: I0121 09:20:20.176903 5113 generic.go:358] "Generic (PLEG): container finished" podID="4296669d-6ddb-4410-877f-63072f824b28" containerID="54bbcdcfc69342fbf5c71b7e9f181c1b009dc0e600b063dde1fed74fa64b4bec" exitCode=0
Jan 21 09:20:20 crc kubenswrapper[5113]: I0121 09:20:20.177136 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6dthc" event={"ID":"4296669d-6ddb-4410-877f-63072f824b28","Type":"ContainerDied","Data":"54bbcdcfc69342fbf5c71b7e9f181c1b009dc0e600b063dde1fed74fa64b4bec"}
Jan 21 09:20:20 crc kubenswrapper[5113]: I0121 09:20:20.184220 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c7874457b-9k5gd" event={"ID":"9d3e2d21-82e0-48ef-842f-ae785ccea3a9","Type":"ContainerStarted","Data":"720ec23eaed33f2a80f3a612af03bd158c49d59518fc8cc6c62275c9f667d1a4"}
Jan 21 09:20:20 crc kubenswrapper[5113]: I0121 09:20:20.197510 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4fxwd" podStartSLOduration=4.134042765 podStartE2EDuration="26.197490343s" podCreationTimestamp="2026-01-21 09:19:54 +0000 UTC" firstStartedPulling="2026-01-21 09:19:55.800600423 +0000 UTC m=+125.301427472" lastFinishedPulling="2026-01-21 09:20:17.864047971 +0000 UTC m=+147.364875050" observedRunningTime="2026-01-21 09:20:20.196809404 +0000 UTC m=+149.697636453" watchObservedRunningTime="2026-01-21 09:20:20.197490343 +0000 UTC m=+149.698317392"
Jan 21 09:20:20 crc kubenswrapper[5113]: I0121 09:20:20.237061 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vlng9" podStartSLOduration=5.037546414 podStartE2EDuration="29.237046112s" podCreationTimestamp="2026-01-21 09:19:51 +0000 UTC" firstStartedPulling="2026-01-21 09:19:53.673484309 +0000 UTC m=+123.174311348" lastFinishedPulling="2026-01-21 09:20:17.872983977 +0000 UTC m=+147.373811046" observedRunningTime="2026-01-21 09:20:20.236112567 +0000 UTC m=+149.736939626" watchObservedRunningTime="2026-01-21 09:20:20.237046112 +0000 UTC m=+149.737873161"
Jan 21 09:20:20 crc kubenswrapper[5113]: I0121 09:20:20.291957 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-5c7874457b-9k5gd"
Jan 21 09:20:20 crc kubenswrapper[5113]: I0121 09:20:20.300196 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5c7874457b-9k5gd"
Jan 21 09:20:20 crc kubenswrapper[5113]: I0121 09:20:20.301144 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-clhrp" podStartSLOduration=4.923191533 podStartE2EDuration="29.301130747s" podCreationTimestamp="2026-01-21 09:19:51 +0000 UTC" firstStartedPulling="2026-01-21 09:19:53.650693599 +0000 UTC m=+123.151520648" lastFinishedPulling="2026-01-21 09:20:18.028632813 +0000 UTC m=+147.529459862" observedRunningTime="2026-01-21 09:20:20.278631887 +0000 UTC m=+149.779458936" watchObservedRunningTime="2026-01-21 09:20:20.301130747 +0000 UTC m=+149.801957796"
Jan 21 09:20:20 crc kubenswrapper[5113]: I0121 09:20:20.301803 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-tcv7n" podStartSLOduration=129.301797885 podStartE2EDuration="2m9.301797885s" podCreationTimestamp="2026-01-21 09:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:20:20.299586344 +0000 UTC m=+149.800413393" watchObservedRunningTime="2026-01-21 09:20:20.301797885 +0000 UTC m=+149.802624934"
Jan 21 09:20:20 crc kubenswrapper[5113]: I0121 09:20:20.325776 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7jsb2" podStartSLOduration=3.977675691 podStartE2EDuration="28.325760495s" podCreationTimestamp="2026-01-21 09:19:52 +0000 UTC" firstStartedPulling="2026-01-21 09:19:53.680294082 +0000 UTC m=+123.181121131" lastFinishedPulling="2026-01-21 09:20:18.028378886 +0000 UTC m=+147.529205935" observedRunningTime="2026-01-21 09:20:20.324631684 +0000 UTC m=+149.825458733" watchObservedRunningTime="2026-01-21 09:20:20.325760495 +0000 UTC m=+149.826587544"
Jan 21 09:20:20 crc kubenswrapper[5113]: I0121 09:20:20.353582 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pkwr7" podStartSLOduration=4.136045849 podStartE2EDuration="26.353565281s" podCreationTimestamp="2026-01-21 09:19:54 +0000 UTC" firstStartedPulling="2026-01-21 09:19:55.788673683 +0000 UTC m=+125.289500732" lastFinishedPulling="2026-01-21 09:20:18.006193115 +0000 UTC m=+147.507020164" observedRunningTime="2026-01-21 09:20:20.350713972 +0000 UTC m=+149.851541021" watchObservedRunningTime="2026-01-21 09:20:20.353565281 +0000 UTC m=+149.854392330"
Jan 21 09:20:20 crc kubenswrapper[5113]: I0121 09:20:20.373713 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7b4659c5bd-2wllj" podStartSLOduration=14.373697045 podStartE2EDuration="14.373697045s" podCreationTimestamp="2026-01-21 09:20:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:20:20.371153165 +0000 UTC m=+149.871980214" watchObservedRunningTime="2026-01-21 09:20:20.373697045 +0000 UTC m=+149.874524094"
Jan 21 09:20:20 crc kubenswrapper[5113]: I0121 09:20:20.398823 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5c7874457b-9k5gd" podStartSLOduration=14.398780506 podStartE2EDuration="14.398780506s" podCreationTimestamp="2026-01-21 09:20:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:20:20.396299578 +0000 UTC m=+149.897126627" watchObservedRunningTime="2026-01-21 09:20:20.398780506 +0000 UTC m=+149.899607555"
Jan 21 09:20:21 crc kubenswrapper[5113]: I0121 09:20:21.191325 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6dthc" event={"ID":"4296669d-6ddb-4410-877f-63072f824b28","Type":"ContainerStarted","Data":"6fdf96397d8196fea4b7617fb80f0ab26f6bdade6cfd65bcff9d408184a75b56"}
Jan 21 09:20:21 crc kubenswrapper[5113]: I0121 09:20:21.442276 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 21 09:20:21 crc kubenswrapper[5113]: I0121 09:20:21.454552 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6dthc" podStartSLOduration=5.200288375 podStartE2EDuration="26.454534657s" podCreationTimestamp="2026-01-21 09:19:55 +0000 UTC" firstStartedPulling="2026-01-21 09:19:56.808222902 +0000 UTC m=+126.309049951" lastFinishedPulling="2026-01-21 09:20:18.062469184 +0000 UTC m=+147.563296233" observedRunningTime="2026-01-21 09:20:21.216259666 +0000 UTC m=+150.717086715" watchObservedRunningTime="2026-01-21 09:20:21.454534657 +0000 UTC m=+150.955361706"
Jan 21 09:20:21 crc kubenswrapper[5113]: I0121 09:20:21.533837 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd3cefe0-c2ae-4044-8dae-67c7bd1379ba-kubelet-dir\") pod \"dd3cefe0-c2ae-4044-8dae-67c7bd1379ba\" (UID: \"dd3cefe0-c2ae-4044-8dae-67c7bd1379ba\") "
Jan 21 09:20:21 crc kubenswrapper[5113]: I0121 09:20:21.533975 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd3cefe0-c2ae-4044-8dae-67c7bd1379ba-kube-api-access\") pod \"dd3cefe0-c2ae-4044-8dae-67c7bd1379ba\" (UID: \"dd3cefe0-c2ae-4044-8dae-67c7bd1379ba\") "
Jan 21 09:20:21 crc kubenswrapper[5113]: I0121 09:20:21.533962 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd3cefe0-c2ae-4044-8dae-67c7bd1379ba-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "dd3cefe0-c2ae-4044-8dae-67c7bd1379ba" (UID: "dd3cefe0-c2ae-4044-8dae-67c7bd1379ba"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 09:20:21 crc kubenswrapper[5113]: I0121 09:20:21.534227 5113 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd3cefe0-c2ae-4044-8dae-67c7bd1379ba-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 21 09:20:21 crc kubenswrapper[5113]: I0121 09:20:21.545323 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd3cefe0-c2ae-4044-8dae-67c7bd1379ba-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "dd3cefe0-c2ae-4044-8dae-67c7bd1379ba" (UID: "dd3cefe0-c2ae-4044-8dae-67c7bd1379ba"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:20:21 crc kubenswrapper[5113]: I0121 09:20:21.611268 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-pltp7"
Jan 21 09:20:21 crc kubenswrapper[5113]: I0121 09:20:21.635043 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd3cefe0-c2ae-4044-8dae-67c7bd1379ba-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 21 09:20:21 crc kubenswrapper[5113]: E0121 09:20:21.684876 5113 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a7eeb36_f834_4af3_8f38_f15bda8f1adb.slice/crio-fcba3beb0653e5b0d464f1c9cbe58bc8c963ce2802d8e6dbd834f40f6ae049b0.scope\": RecentStats: unable to find data in memory cache]"
Jan 21 09:20:21 crc kubenswrapper[5113]: I0121 09:20:21.763040 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-648hm_6a7eeb36-f834-4af3-8f38-f15bda8f1adb/kube-multus-additional-cni-plugins/0.log"
Jan 21 09:20:21 crc kubenswrapper[5113]: I0121 09:20:21.763115 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-648hm"
Jan 21 09:20:21 crc kubenswrapper[5113]: I0121 09:20:21.836493 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/6a7eeb36-f834-4af3-8f38-f15bda8f1adb-ready\") pod \"6a7eeb36-f834-4af3-8f38-f15bda8f1adb\" (UID: \"6a7eeb36-f834-4af3-8f38-f15bda8f1adb\") "
Jan 21 09:20:21 crc kubenswrapper[5113]: I0121 09:20:21.836863 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6a7eeb36-f834-4af3-8f38-f15bda8f1adb-cni-sysctl-allowlist\") pod \"6a7eeb36-f834-4af3-8f38-f15bda8f1adb\" (UID: \"6a7eeb36-f834-4af3-8f38-f15bda8f1adb\") "
Jan 21 09:20:21 crc kubenswrapper[5113]: I0121 09:20:21.836998 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a7eeb36-f834-4af3-8f38-f15bda8f1adb-ready" (OuterVolumeSpecName: "ready") pod "6a7eeb36-f834-4af3-8f38-f15bda8f1adb" (UID: "6a7eeb36-f834-4af3-8f38-f15bda8f1adb"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:20:21 crc kubenswrapper[5113]: I0121 09:20:21.837108 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6a7eeb36-f834-4af3-8f38-f15bda8f1adb-tuning-conf-dir\") pod \"6a7eeb36-f834-4af3-8f38-f15bda8f1adb\" (UID: \"6a7eeb36-f834-4af3-8f38-f15bda8f1adb\") "
Jan 21 09:20:21 crc kubenswrapper[5113]: I0121 09:20:21.837219 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkwlf\" (UniqueName: \"kubernetes.io/projected/6a7eeb36-f834-4af3-8f38-f15bda8f1adb-kube-api-access-mkwlf\") pod \"6a7eeb36-f834-4af3-8f38-f15bda8f1adb\" (UID: \"6a7eeb36-f834-4af3-8f38-f15bda8f1adb\") "
Jan 21 09:20:21 crc kubenswrapper[5113]: I0121 09:20:21.837247 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a7eeb36-f834-4af3-8f38-f15bda8f1adb-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "6a7eeb36-f834-4af3-8f38-f15bda8f1adb" (UID: "6a7eeb36-f834-4af3-8f38-f15bda8f1adb"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 09:20:21 crc kubenswrapper[5113]: I0121 09:20:21.837417 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a7eeb36-f834-4af3-8f38-f15bda8f1adb-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "6a7eeb36-f834-4af3-8f38-f15bda8f1adb" (UID: "6a7eeb36-f834-4af3-8f38-f15bda8f1adb"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:20:21 crc kubenswrapper[5113]: I0121 09:20:21.837656 5113 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/6a7eeb36-f834-4af3-8f38-f15bda8f1adb-ready\") on node \"crc\" DevicePath \"\""
Jan 21 09:20:21 crc kubenswrapper[5113]: I0121 09:20:21.837744 5113 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6a7eeb36-f834-4af3-8f38-f15bda8f1adb-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Jan 21 09:20:21 crc kubenswrapper[5113]: I0121 09:20:21.837815 5113 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6a7eeb36-f834-4af3-8f38-f15bda8f1adb-tuning-conf-dir\") on node \"crc\" DevicePath \"\""
Jan 21 09:20:21 crc kubenswrapper[5113]: I0121 09:20:21.841950 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a7eeb36-f834-4af3-8f38-f15bda8f1adb-kube-api-access-mkwlf" (OuterVolumeSpecName: "kube-api-access-mkwlf") pod "6a7eeb36-f834-4af3-8f38-f15bda8f1adb" (UID: "6a7eeb36-f834-4af3-8f38-f15bda8f1adb"). InnerVolumeSpecName "kube-api-access-mkwlf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:20:21 crc kubenswrapper[5113]: I0121 09:20:21.924278 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vlng9"
Jan 21 09:20:21 crc kubenswrapper[5113]: I0121 09:20:21.924645 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-vlng9"
Jan 21 09:20:21 crc kubenswrapper[5113]: I0121 09:20:21.939137 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mkwlf\" (UniqueName: \"kubernetes.io/projected/6a7eeb36-f834-4af3-8f38-f15bda8f1adb-kube-api-access-mkwlf\") on node \"crc\" DevicePath \"\""
Jan 21 09:20:22 crc kubenswrapper[5113]: I0121 09:20:22.014222 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vlng9"
Jan 21 09:20:22 crc kubenswrapper[5113]: I0121 09:20:22.091185 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-frj7n"
Jan 21 09:20:22 crc kubenswrapper[5113]: I0121 09:20:22.091254 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-frj7n"
Jan 21 09:20:22 crc kubenswrapper[5113]: I0121 09:20:22.198339 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Jan 21 09:20:22 crc kubenswrapper[5113]: I0121 09:20:22.198349 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"dd3cefe0-c2ae-4044-8dae-67c7bd1379ba","Type":"ContainerDied","Data":"93d54b623beae73e4a100bea78ea33ce373ba35d00da94682b9878e53b2ea334"}
Jan 21 09:20:22 crc kubenswrapper[5113]: I0121 09:20:22.199721 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93d54b623beae73e4a100bea78ea33ce373ba35d00da94682b9878e53b2ea334"
Jan 21 09:20:22 crc kubenswrapper[5113]: I0121 09:20:22.200311 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-648hm_6a7eeb36-f834-4af3-8f38-f15bda8f1adb/kube-multus-additional-cni-plugins/0.log"
Jan 21 09:20:22 crc kubenswrapper[5113]: I0121 09:20:22.200349 5113 generic.go:358] "Generic (PLEG): container finished" podID="6a7eeb36-f834-4af3-8f38-f15bda8f1adb" containerID="fcba3beb0653e5b0d464f1c9cbe58bc8c963ce2802d8e6dbd834f40f6ae049b0" exitCode=137
Jan 21 09:20:22 crc kubenswrapper[5113]: I0121 09:20:22.200476 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-648hm"
Jan 21 09:20:22 crc kubenswrapper[5113]: I0121 09:20:22.200578 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-648hm" event={"ID":"6a7eeb36-f834-4af3-8f38-f15bda8f1adb","Type":"ContainerDied","Data":"fcba3beb0653e5b0d464f1c9cbe58bc8c963ce2802d8e6dbd834f40f6ae049b0"}
Jan 21 09:20:22 crc kubenswrapper[5113]: I0121 09:20:22.200614 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-648hm" event={"ID":"6a7eeb36-f834-4af3-8f38-f15bda8f1adb","Type":"ContainerDied","Data":"f45ba885429dc8dfa103ca6c47f75ed51210b1283e28f11b52dce713f78d10ea"}
Jan 21 09:20:22 crc kubenswrapper[5113]: I0121 09:20:22.200640 5113 scope.go:117] "RemoveContainer" containerID="fcba3beb0653e5b0d464f1c9cbe58bc8c963ce2802d8e6dbd834f40f6ae049b0"
Jan 21 09:20:22 crc kubenswrapper[5113]: I0121 09:20:22.224138 5113 scope.go:117] "RemoveContainer" containerID="fcba3beb0653e5b0d464f1c9cbe58bc8c963ce2802d8e6dbd834f40f6ae049b0"
Jan 21 09:20:22 crc kubenswrapper[5113]: E0121 09:20:22.225071 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcba3beb0653e5b0d464f1c9cbe58bc8c963ce2802d8e6dbd834f40f6ae049b0\": container with ID starting with fcba3beb0653e5b0d464f1c9cbe58bc8c963ce2802d8e6dbd834f40f6ae049b0 not found: ID does not exist" containerID="fcba3beb0653e5b0d464f1c9cbe58bc8c963ce2802d8e6dbd834f40f6ae049b0"
Jan 21 09:20:22 crc kubenswrapper[5113]: I0121 09:20:22.225182 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcba3beb0653e5b0d464f1c9cbe58bc8c963ce2802d8e6dbd834f40f6ae049b0"} err="failed to get container status \"fcba3beb0653e5b0d464f1c9cbe58bc8c963ce2802d8e6dbd834f40f6ae049b0\": rpc error: code = NotFound desc = could not find container \"fcba3beb0653e5b0d464f1c9cbe58bc8c963ce2802d8e6dbd834f40f6ae049b0\": container with ID starting with fcba3beb0653e5b0d464f1c9cbe58bc8c963ce2802d8e6dbd834f40f6ae049b0 not found: ID does not exist"
Jan 21 09:20:22 crc kubenswrapper[5113]: I0121 09:20:22.241288 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-648hm"]
Jan 21 09:20:22 crc kubenswrapper[5113]: I0121 09:20:22.244113 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-648hm"]
Jan 21 09:20:22 crc kubenswrapper[5113]: I0121 09:20:22.301403 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-clhrp"
Jan 21 09:20:22 crc kubenswrapper[5113]: I0121 09:20:22.301446 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-clhrp"
Jan 21 09:20:22 crc kubenswrapper[5113]: I0121 09:20:22.352741 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-clhrp"
Jan 21 09:20:22 crc kubenswrapper[5113]: I0121 09:20:22.485184 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-7jsb2"
Jan 21 09:20:22 crc kubenswrapper[5113]: I0121 09:20:22.485340 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7jsb2"
Jan 21 09:20:22 crc kubenswrapper[5113]: I0121 09:20:22.530157 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7jsb2"
Jan 21 09:20:22 crc kubenswrapper[5113]: I0121 09:20:22.850623 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a7eeb36-f834-4af3-8f38-f15bda8f1adb" path="/var/lib/kubelet/pods/6a7eeb36-f834-4af3-8f38-f15bda8f1adb/volumes"
Jan 21 09:20:23 crc kubenswrapper[5113]: I0121
09:20:23.144230 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-frj7n" podUID="2097e4fe-30fc-4341-90b7-14877224a474" containerName="registry-server" probeResult="failure" output=< Jan 21 09:20:23 crc kubenswrapper[5113]: timeout: failed to connect service ":50051" within 1s Jan 21 09:20:23 crc kubenswrapper[5113]: > Jan 21 09:20:24 crc kubenswrapper[5113]: I0121 09:20:24.107377 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b57t5" Jan 21 09:20:24 crc kubenswrapper[5113]: I0121 09:20:24.108353 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-b57t5" Jan 21 09:20:24 crc kubenswrapper[5113]: I0121 09:20:24.159355 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b57t5" Jan 21 09:20:24 crc kubenswrapper[5113]: I0121 09:20:24.249351 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7jsb2" Jan 21 09:20:24 crc kubenswrapper[5113]: I0121 09:20:24.259055 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b57t5" Jan 21 09:20:24 crc kubenswrapper[5113]: I0121 09:20:24.481211 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4fxwd" Jan 21 09:20:24 crc kubenswrapper[5113]: I0121 09:20:24.481250 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-4fxwd" Jan 21 09:20:24 crc kubenswrapper[5113]: I0121 09:20:24.513718 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4fxwd" Jan 21 09:20:25 crc kubenswrapper[5113]: I0121 09:20:25.086976 5113 kubelet.go:2658] "SyncLoop 
(probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-pkwr7" Jan 21 09:20:25 crc kubenswrapper[5113]: I0121 09:20:25.087268 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pkwr7" Jan 21 09:20:25 crc kubenswrapper[5113]: I0121 09:20:25.263230 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4fxwd" Jan 21 09:20:25 crc kubenswrapper[5113]: I0121 09:20:25.414408 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7jsb2"] Jan 21 09:20:25 crc kubenswrapper[5113]: I0121 09:20:25.524943 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6dthc" Jan 21 09:20:25 crc kubenswrapper[5113]: I0121 09:20:25.524978 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-6dthc" Jan 21 09:20:26 crc kubenswrapper[5113]: I0121 09:20:26.102511 5113 ???:1] "http: TLS handshake error from 192.168.126.11:46766: no serving certificate available for the kubelet" Jan 21 09:20:26 crc kubenswrapper[5113]: I0121 09:20:26.138125 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pkwr7" podUID="4ad34d8c-f3f3-436e-8054-e0aa221aa622" containerName="registry-server" probeResult="failure" output=< Jan 21 09:20:26 crc kubenswrapper[5113]: timeout: failed to connect service ":50051" within 1s Jan 21 09:20:26 crc kubenswrapper[5113]: > Jan 21 09:20:26 crc kubenswrapper[5113]: I0121 09:20:26.224599 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7jsb2" podUID="2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0" containerName="registry-server" containerID="cri-o://ecd200f0636ea3664f0eed2ca99711ea1e0df086dd3dc7234b9e778e007fe7c0" gracePeriod=2 
Jan 21 09:20:26 crc kubenswrapper[5113]: I0121 09:20:26.567396 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6dthc" podUID="4296669d-6ddb-4410-877f-63072f824b28" containerName="registry-server" probeResult="failure" output=< Jan 21 09:20:26 crc kubenswrapper[5113]: timeout: failed to connect service ":50051" within 1s Jan 21 09:20:26 crc kubenswrapper[5113]: > Jan 21 09:20:26 crc kubenswrapper[5113]: I0121 09:20:26.857335 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7b4659c5bd-2wllj"] Jan 21 09:20:26 crc kubenswrapper[5113]: I0121 09:20:26.857644 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7b4659c5bd-2wllj" podUID="3e8d5386-d038-46c5-8e21-fed111412a4b" containerName="controller-manager" containerID="cri-o://f3422c39fb038549f1142082ec0e669c28fdc1c392f2d3a56cb194fce22b0964" gracePeriod=30 Jan 21 09:20:26 crc kubenswrapper[5113]: I0121 09:20:26.870492 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c7874457b-9k5gd"] Jan 21 09:20:26 crc kubenswrapper[5113]: I0121 09:20:26.870721 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5c7874457b-9k5gd" podUID="9d3e2d21-82e0-48ef-842f-ae785ccea3a9" containerName="route-controller-manager" containerID="cri-o://720ec23eaed33f2a80f3a612af03bd158c49d59518fc8cc6c62275c9f667d1a4" gracePeriod=30 Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.233809 5113 generic.go:358] "Generic (PLEG): container finished" podID="2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0" containerID="ecd200f0636ea3664f0eed2ca99711ea1e0df086dd3dc7234b9e778e007fe7c0" exitCode=0 Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.233848 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-7jsb2" event={"ID":"2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0","Type":"ContainerDied","Data":"ecd200f0636ea3664f0eed2ca99711ea1e0df086dd3dc7234b9e778e007fe7c0"} Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.629186 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7jsb2" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.712443 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0-utilities\") pod \"2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0\" (UID: \"2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0\") " Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.712553 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxv45\" (UniqueName: \"kubernetes.io/projected/2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0-kube-api-access-jxv45\") pod \"2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0\" (UID: \"2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0\") " Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.712615 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0-catalog-content\") pod \"2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0\" (UID: \"2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0\") " Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.713507 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0-utilities" (OuterVolumeSpecName: "utilities") pod "2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0" (UID: "2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.725935 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0-kube-api-access-jxv45" (OuterVolumeSpecName: "kube-api-access-jxv45") pod "2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0" (UID: "2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0"). InnerVolumeSpecName "kube-api-access-jxv45". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.777252 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0" (UID: "2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.790041 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b4659c5bd-2wllj" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.793419 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c7874457b-9k5gd" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.813178 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3e8d5386-d038-46c5-8e21-fed111412a4b-tmp\") pod \"3e8d5386-d038-46c5-8e21-fed111412a4b\" (UID: \"3e8d5386-d038-46c5-8e21-fed111412a4b\") " Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.813231 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e8d5386-d038-46c5-8e21-fed111412a4b-config\") pod \"3e8d5386-d038-46c5-8e21-fed111412a4b\" (UID: \"3e8d5386-d038-46c5-8e21-fed111412a4b\") " Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.813257 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e8d5386-d038-46c5-8e21-fed111412a4b-serving-cert\") pod \"3e8d5386-d038-46c5-8e21-fed111412a4b\" (UID: \"3e8d5386-d038-46c5-8e21-fed111412a4b\") " Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.813277 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d3e2d21-82e0-48ef-842f-ae785ccea3a9-serving-cert\") pod \"9d3e2d21-82e0-48ef-842f-ae785ccea3a9\" (UID: \"9d3e2d21-82e0-48ef-842f-ae785ccea3a9\") " Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.813310 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3e8d5386-d038-46c5-8e21-fed111412a4b-proxy-ca-bundles\") pod \"3e8d5386-d038-46c5-8e21-fed111412a4b\" (UID: \"3e8d5386-d038-46c5-8e21-fed111412a4b\") " Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.813580 5113 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-controller-manager/controller-manager-6966fb8454-qfh7p"] Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.813650 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e8d5386-d038-46c5-8e21-fed111412a4b-tmp" (OuterVolumeSpecName: "tmp") pod "3e8d5386-d038-46c5-8e21-fed111412a4b" (UID: "3e8d5386-d038-46c5-8e21-fed111412a4b"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.814046 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e8d5386-d038-46c5-8e21-fed111412a4b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3e8d5386-d038-46c5-8e21-fed111412a4b" (UID: "3e8d5386-d038-46c5-8e21-fed111412a4b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.814111 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6a7eeb36-f834-4af3-8f38-f15bda8f1adb" containerName="kube-multus-additional-cni-plugins" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.814125 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a7eeb36-f834-4af3-8f38-f15bda8f1adb" containerName="kube-multus-additional-cni-plugins" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.814140 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0" containerName="extract-content" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.814146 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0" containerName="extract-content" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.814155 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0" 
containerName="extract-utilities" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.814163 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0" containerName="extract-utilities" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.814183 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0" containerName="registry-server" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.814188 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0" containerName="registry-server" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.814198 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dd3cefe0-c2ae-4044-8dae-67c7bd1379ba" containerName="pruner" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.814204 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd3cefe0-c2ae-4044-8dae-67c7bd1379ba" containerName="pruner" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.814210 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3e8d5386-d038-46c5-8e21-fed111412a4b" containerName="controller-manager" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.814216 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e8d5386-d038-46c5-8e21-fed111412a4b" containerName="controller-manager" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.814216 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mvh7\" (UniqueName: \"kubernetes.io/projected/3e8d5386-d038-46c5-8e21-fed111412a4b-kube-api-access-8mvh7\") pod \"3e8d5386-d038-46c5-8e21-fed111412a4b\" (UID: \"3e8d5386-d038-46c5-8e21-fed111412a4b\") " Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.814239 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/3e8d5386-d038-46c5-8e21-fed111412a4b-config" (OuterVolumeSpecName: "config") pod "3e8d5386-d038-46c5-8e21-fed111412a4b" (UID: "3e8d5386-d038-46c5-8e21-fed111412a4b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.814224 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9d3e2d21-82e0-48ef-842f-ae785ccea3a9" containerName="route-controller-manager" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.814272 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d3e2d21-82e0-48ef-842f-ae785ccea3a9" containerName="route-controller-manager" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.814305 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqxdg\" (UniqueName: \"kubernetes.io/projected/9d3e2d21-82e0-48ef-842f-ae785ccea3a9-kube-api-access-dqxdg\") pod \"9d3e2d21-82e0-48ef-842f-ae785ccea3a9\" (UID: \"9d3e2d21-82e0-48ef-842f-ae785ccea3a9\") " Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.814370 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d3e2d21-82e0-48ef-842f-ae785ccea3a9-client-ca\") pod \"9d3e2d21-82e0-48ef-842f-ae785ccea3a9\" (UID: \"9d3e2d21-82e0-48ef-842f-ae785ccea3a9\") " Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.814389 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e8d5386-d038-46c5-8e21-fed111412a4b-client-ca\") pod \"3e8d5386-d038-46c5-8e21-fed111412a4b\" (UID: \"3e8d5386-d038-46c5-8e21-fed111412a4b\") " Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.814456 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9d3e2d21-82e0-48ef-842f-ae785ccea3a9-config\") pod \"9d3e2d21-82e0-48ef-842f-ae785ccea3a9\" (UID: \"9d3e2d21-82e0-48ef-842f-ae785ccea3a9\") " Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.814462 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0" containerName="registry-server" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.814482 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="6a7eeb36-f834-4af3-8f38-f15bda8f1adb" containerName="kube-multus-additional-cni-plugins" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.814493 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="3e8d5386-d038-46c5-8e21-fed111412a4b" containerName="controller-manager" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.814507 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="dd3cefe0-c2ae-4044-8dae-67c7bd1379ba" containerName="pruner" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.814521 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="9d3e2d21-82e0-48ef-842f-ae785ccea3a9" containerName="route-controller-manager" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.814647 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d3e2d21-82e0-48ef-842f-ae785ccea3a9-tmp" (OuterVolumeSpecName: "tmp") pod "9d3e2d21-82e0-48ef-842f-ae785ccea3a9" (UID: "9d3e2d21-82e0-48ef-842f-ae785ccea3a9"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.814494 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9d3e2d21-82e0-48ef-842f-ae785ccea3a9-tmp\") pod \"9d3e2d21-82e0-48ef-842f-ae785ccea3a9\" (UID: \"9d3e2d21-82e0-48ef-842f-ae785ccea3a9\") " Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.814999 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e8d5386-d038-46c5-8e21-fed111412a4b-client-ca" (OuterVolumeSpecName: "client-ca") pod "3e8d5386-d038-46c5-8e21-fed111412a4b" (UID: "3e8d5386-d038-46c5-8e21-fed111412a4b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.815048 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d3e2d21-82e0-48ef-842f-ae785ccea3a9-client-ca" (OuterVolumeSpecName: "client-ca") pod "9d3e2d21-82e0-48ef-842f-ae785ccea3a9" (UID: "9d3e2d21-82e0-48ef-842f-ae785ccea3a9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.815289 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d3e2d21-82e0-48ef-842f-ae785ccea3a9-config" (OuterVolumeSpecName: "config") pod "9d3e2d21-82e0-48ef-842f-ae785ccea3a9" (UID: "9d3e2d21-82e0-48ef-842f-ae785ccea3a9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.815370 5113 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9d3e2d21-82e0-48ef-842f-ae785ccea3a9-tmp\") on node \"crc\" DevicePath \"\"" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.815401 5113 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3e8d5386-d038-46c5-8e21-fed111412a4b-tmp\") on node \"crc\" DevicePath \"\"" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.815412 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e8d5386-d038-46c5-8e21-fed111412a4b-config\") on node \"crc\" DevicePath \"\"" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.815424 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jxv45\" (UniqueName: \"kubernetes.io/projected/2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0-kube-api-access-jxv45\") on node \"crc\" DevicePath \"\"" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.815435 5113 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3e8d5386-d038-46c5-8e21-fed111412a4b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.815444 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.815452 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.815462 5113 reconciler_common.go:299] "Volume detached for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d3e2d21-82e0-48ef-842f-ae785ccea3a9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.815470 5113 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3e8d5386-d038-46c5-8e21-fed111412a4b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.818766 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e8d5386-d038-46c5-8e21-fed111412a4b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3e8d5386-d038-46c5-8e21-fed111412a4b" (UID: "3e8d5386-d038-46c5-8e21-fed111412a4b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.822419 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d3e2d21-82e0-48ef-842f-ae785ccea3a9-kube-api-access-dqxdg" (OuterVolumeSpecName: "kube-api-access-dqxdg") pod "9d3e2d21-82e0-48ef-842f-ae785ccea3a9" (UID: "9d3e2d21-82e0-48ef-842f-ae785ccea3a9"). InnerVolumeSpecName "kube-api-access-dqxdg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.822803 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d3e2d21-82e0-48ef-842f-ae785ccea3a9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d3e2d21-82e0-48ef-842f-ae785ccea3a9" (UID: "9d3e2d21-82e0-48ef-842f-ae785ccea3a9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.822931 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e8d5386-d038-46c5-8e21-fed111412a4b-kube-api-access-8mvh7" (OuterVolumeSpecName: "kube-api-access-8mvh7") pod "3e8d5386-d038-46c5-8e21-fed111412a4b" (UID: "3e8d5386-d038-46c5-8e21-fed111412a4b"). InnerVolumeSpecName "kube-api-access-8mvh7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.826826 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6966fb8454-qfh7p" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.827196 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6966fb8454-qfh7p"] Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.838180 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dd8bcbdb-4fhc4"] Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.852663 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dd8bcbdb-4fhc4"] Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.852825 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7dd8bcbdb-4fhc4" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.916525 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51134b69-646f-4921-9918-13abf2c16642-serving-cert\") pod \"route-controller-manager-7dd8bcbdb-4fhc4\" (UID: \"51134b69-646f-4921-9918-13abf2c16642\") " pod="openshift-route-controller-manager/route-controller-manager-7dd8bcbdb-4fhc4" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.916578 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs69c\" (UniqueName: \"kubernetes.io/projected/51134b69-646f-4921-9918-13abf2c16642-kube-api-access-fs69c\") pod \"route-controller-manager-7dd8bcbdb-4fhc4\" (UID: \"51134b69-646f-4921-9918-13abf2c16642\") " pod="openshift-route-controller-manager/route-controller-manager-7dd8bcbdb-4fhc4" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.916611 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a-client-ca\") pod \"controller-manager-6966fb8454-qfh7p\" (UID: \"ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a\") " pod="openshift-controller-manager/controller-manager-6966fb8454-qfh7p" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.916779 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51134b69-646f-4921-9918-13abf2c16642-client-ca\") pod \"route-controller-manager-7dd8bcbdb-4fhc4\" (UID: \"51134b69-646f-4921-9918-13abf2c16642\") " pod="openshift-route-controller-manager/route-controller-manager-7dd8bcbdb-4fhc4" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.916831 5113 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a-config\") pod \"controller-manager-6966fb8454-qfh7p\" (UID: \"ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a\") " pod="openshift-controller-manager/controller-manager-6966fb8454-qfh7p" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.916852 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a-proxy-ca-bundles\") pod \"controller-manager-6966fb8454-qfh7p\" (UID: \"ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a\") " pod="openshift-controller-manager/controller-manager-6966fb8454-qfh7p" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.916924 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcm4q\" (UniqueName: \"kubernetes.io/projected/ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a-kube-api-access-fcm4q\") pod \"controller-manager-6966fb8454-qfh7p\" (UID: \"ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a\") " pod="openshift-controller-manager/controller-manager-6966fb8454-qfh7p" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.916947 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a-serving-cert\") pod \"controller-manager-6966fb8454-qfh7p\" (UID: \"ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a\") " pod="openshift-controller-manager/controller-manager-6966fb8454-qfh7p" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.916975 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51134b69-646f-4921-9918-13abf2c16642-config\") pod \"route-controller-manager-7dd8bcbdb-4fhc4\" (UID: 
\"51134b69-646f-4921-9918-13abf2c16642\") " pod="openshift-route-controller-manager/route-controller-manager-7dd8bcbdb-4fhc4" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.917031 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/51134b69-646f-4921-9918-13abf2c16642-tmp\") pod \"route-controller-manager-7dd8bcbdb-4fhc4\" (UID: \"51134b69-646f-4921-9918-13abf2c16642\") " pod="openshift-route-controller-manager/route-controller-manager-7dd8bcbdb-4fhc4" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.917088 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a-tmp\") pod \"controller-manager-6966fb8454-qfh7p\" (UID: \"ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a\") " pod="openshift-controller-manager/controller-manager-6966fb8454-qfh7p" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.917149 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e8d5386-d038-46c5-8e21-fed111412a4b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.917164 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d3e2d21-82e0-48ef-842f-ae785ccea3a9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.917176 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8mvh7\" (UniqueName: \"kubernetes.io/projected/3e8d5386-d038-46c5-8e21-fed111412a4b-kube-api-access-8mvh7\") on node \"crc\" DevicePath \"\"" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.917188 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dqxdg\" (UniqueName: 
\"kubernetes.io/projected/9d3e2d21-82e0-48ef-842f-ae785ccea3a9-kube-api-access-dqxdg\") on node \"crc\" DevicePath \"\"" Jan 21 09:20:27 crc kubenswrapper[5113]: I0121 09:20:27.917202 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d3e2d21-82e0-48ef-842f-ae785ccea3a9-config\") on node \"crc\" DevicePath \"\"" Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.018112 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a-tmp\") pod \"controller-manager-6966fb8454-qfh7p\" (UID: \"ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a\") " pod="openshift-controller-manager/controller-manager-6966fb8454-qfh7p" Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.018149 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51134b69-646f-4921-9918-13abf2c16642-serving-cert\") pod \"route-controller-manager-7dd8bcbdb-4fhc4\" (UID: \"51134b69-646f-4921-9918-13abf2c16642\") " pod="openshift-route-controller-manager/route-controller-manager-7dd8bcbdb-4fhc4" Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.018183 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fs69c\" (UniqueName: \"kubernetes.io/projected/51134b69-646f-4921-9918-13abf2c16642-kube-api-access-fs69c\") pod \"route-controller-manager-7dd8bcbdb-4fhc4\" (UID: \"51134b69-646f-4921-9918-13abf2c16642\") " pod="openshift-route-controller-manager/route-controller-manager-7dd8bcbdb-4fhc4" Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.018225 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a-client-ca\") pod \"controller-manager-6966fb8454-qfh7p\" (UID: \"ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a\") " 
pod="openshift-controller-manager/controller-manager-6966fb8454-qfh7p" Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.018258 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51134b69-646f-4921-9918-13abf2c16642-client-ca\") pod \"route-controller-manager-7dd8bcbdb-4fhc4\" (UID: \"51134b69-646f-4921-9918-13abf2c16642\") " pod="openshift-route-controller-manager/route-controller-manager-7dd8bcbdb-4fhc4" Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.018325 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a-config\") pod \"controller-manager-6966fb8454-qfh7p\" (UID: \"ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a\") " pod="openshift-controller-manager/controller-manager-6966fb8454-qfh7p" Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.018397 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a-proxy-ca-bundles\") pod \"controller-manager-6966fb8454-qfh7p\" (UID: \"ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a\") " pod="openshift-controller-manager/controller-manager-6966fb8454-qfh7p" Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.018451 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fcm4q\" (UniqueName: \"kubernetes.io/projected/ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a-kube-api-access-fcm4q\") pod \"controller-manager-6966fb8454-qfh7p\" (UID: \"ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a\") " pod="openshift-controller-manager/controller-manager-6966fb8454-qfh7p" Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.018480 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a-serving-cert\") pod \"controller-manager-6966fb8454-qfh7p\" (UID: \"ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a\") " pod="openshift-controller-manager/controller-manager-6966fb8454-qfh7p" Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.018513 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51134b69-646f-4921-9918-13abf2c16642-config\") pod \"route-controller-manager-7dd8bcbdb-4fhc4\" (UID: \"51134b69-646f-4921-9918-13abf2c16642\") " pod="openshift-route-controller-manager/route-controller-manager-7dd8bcbdb-4fhc4" Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.018545 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/51134b69-646f-4921-9918-13abf2c16642-tmp\") pod \"route-controller-manager-7dd8bcbdb-4fhc4\" (UID: \"51134b69-646f-4921-9918-13abf2c16642\") " pod="openshift-route-controller-manager/route-controller-manager-7dd8bcbdb-4fhc4" Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.019089 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/51134b69-646f-4921-9918-13abf2c16642-tmp\") pod \"route-controller-manager-7dd8bcbdb-4fhc4\" (UID: \"51134b69-646f-4921-9918-13abf2c16642\") " pod="openshift-route-controller-manager/route-controller-manager-7dd8bcbdb-4fhc4" Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.019484 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a-proxy-ca-bundles\") pod \"controller-manager-6966fb8454-qfh7p\" (UID: \"ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a\") " pod="openshift-controller-manager/controller-manager-6966fb8454-qfh7p" Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.019569 5113 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a-client-ca\") pod \"controller-manager-6966fb8454-qfh7p\" (UID: \"ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a\") " pod="openshift-controller-manager/controller-manager-6966fb8454-qfh7p" Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.019583 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51134b69-646f-4921-9918-13abf2c16642-client-ca\") pod \"route-controller-manager-7dd8bcbdb-4fhc4\" (UID: \"51134b69-646f-4921-9918-13abf2c16642\") " pod="openshift-route-controller-manager/route-controller-manager-7dd8bcbdb-4fhc4" Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.019611 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a-tmp\") pod \"controller-manager-6966fb8454-qfh7p\" (UID: \"ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a\") " pod="openshift-controller-manager/controller-manager-6966fb8454-qfh7p" Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.019836 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51134b69-646f-4921-9918-13abf2c16642-config\") pod \"route-controller-manager-7dd8bcbdb-4fhc4\" (UID: \"51134b69-646f-4921-9918-13abf2c16642\") " pod="openshift-route-controller-manager/route-controller-manager-7dd8bcbdb-4fhc4" Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.019879 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a-config\") pod \"controller-manager-6966fb8454-qfh7p\" (UID: \"ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a\") " pod="openshift-controller-manager/controller-manager-6966fb8454-qfh7p" Jan 21 09:20:28 crc 
kubenswrapper[5113]: I0121 09:20:28.024325 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a-serving-cert\") pod \"controller-manager-6966fb8454-qfh7p\" (UID: \"ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a\") " pod="openshift-controller-manager/controller-manager-6966fb8454-qfh7p" Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.026140 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51134b69-646f-4921-9918-13abf2c16642-serving-cert\") pod \"route-controller-manager-7dd8bcbdb-4fhc4\" (UID: \"51134b69-646f-4921-9918-13abf2c16642\") " pod="openshift-route-controller-manager/route-controller-manager-7dd8bcbdb-4fhc4" Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.034572 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs69c\" (UniqueName: \"kubernetes.io/projected/51134b69-646f-4921-9918-13abf2c16642-kube-api-access-fs69c\") pod \"route-controller-manager-7dd8bcbdb-4fhc4\" (UID: \"51134b69-646f-4921-9918-13abf2c16642\") " pod="openshift-route-controller-manager/route-controller-manager-7dd8bcbdb-4fhc4" Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.034731 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcm4q\" (UniqueName: \"kubernetes.io/projected/ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a-kube-api-access-fcm4q\") pod \"controller-manager-6966fb8454-qfh7p\" (UID: \"ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a\") " pod="openshift-controller-manager/controller-manager-6966fb8454-qfh7p" Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.156239 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6966fb8454-qfh7p" Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.169423 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7dd8bcbdb-4fhc4" Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.260422 5113 generic.go:358] "Generic (PLEG): container finished" podID="9d3e2d21-82e0-48ef-842f-ae785ccea3a9" containerID="720ec23eaed33f2a80f3a612af03bd158c49d59518fc8cc6c62275c9f667d1a4" exitCode=0 Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.260573 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c7874457b-9k5gd" event={"ID":"9d3e2d21-82e0-48ef-842f-ae785ccea3a9","Type":"ContainerDied","Data":"720ec23eaed33f2a80f3a612af03bd158c49d59518fc8cc6c62275c9f667d1a4"} Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.260598 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c7874457b-9k5gd" event={"ID":"9d3e2d21-82e0-48ef-842f-ae785ccea3a9","Type":"ContainerDied","Data":"18e8211d91bfa44c3ed9bff0d0ce6cd8cfba887dde45c0a3925eb9cfbeb60c5d"} Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.260617 5113 scope.go:117] "RemoveContainer" containerID="720ec23eaed33f2a80f3a612af03bd158c49d59518fc8cc6c62275c9f667d1a4" Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.260781 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c7874457b-9k5gd" Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.266188 5113 generic.go:358] "Generic (PLEG): container finished" podID="3e8d5386-d038-46c5-8e21-fed111412a4b" containerID="f3422c39fb038549f1142082ec0e669c28fdc1c392f2d3a56cb194fce22b0964" exitCode=0 Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.268032 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b4659c5bd-2wllj" Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.268190 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b4659c5bd-2wllj" event={"ID":"3e8d5386-d038-46c5-8e21-fed111412a4b","Type":"ContainerDied","Data":"f3422c39fb038549f1142082ec0e669c28fdc1c392f2d3a56cb194fce22b0964"} Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.268218 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b4659c5bd-2wllj" event={"ID":"3e8d5386-d038-46c5-8e21-fed111412a4b","Type":"ContainerDied","Data":"ff64fbab25ef4aa2eafdb76a4807b09a625f8e202a42948d52d1f61be5c502d7"} Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.270327 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7jsb2" event={"ID":"2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0","Type":"ContainerDied","Data":"820d280e84f379faa8f44c92fedb115155f5a4be64018d8156f985d203d3b2a1"} Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.270424 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7jsb2" Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.280469 5113 scope.go:117] "RemoveContainer" containerID="720ec23eaed33f2a80f3a612af03bd158c49d59518fc8cc6c62275c9f667d1a4" Jan 21 09:20:28 crc kubenswrapper[5113]: E0121 09:20:28.281610 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"720ec23eaed33f2a80f3a612af03bd158c49d59518fc8cc6c62275c9f667d1a4\": container with ID starting with 720ec23eaed33f2a80f3a612af03bd158c49d59518fc8cc6c62275c9f667d1a4 not found: ID does not exist" containerID="720ec23eaed33f2a80f3a612af03bd158c49d59518fc8cc6c62275c9f667d1a4" Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.281636 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"720ec23eaed33f2a80f3a612af03bd158c49d59518fc8cc6c62275c9f667d1a4"} err="failed to get container status \"720ec23eaed33f2a80f3a612af03bd158c49d59518fc8cc6c62275c9f667d1a4\": rpc error: code = NotFound desc = could not find container \"720ec23eaed33f2a80f3a612af03bd158c49d59518fc8cc6c62275c9f667d1a4\": container with ID starting with 720ec23eaed33f2a80f3a612af03bd158c49d59518fc8cc6c62275c9f667d1a4 not found: ID does not exist" Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.281651 5113 scope.go:117] "RemoveContainer" containerID="f3422c39fb038549f1142082ec0e669c28fdc1c392f2d3a56cb194fce22b0964" Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.286091 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c7874457b-9k5gd"] Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.303198 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c7874457b-9k5gd"] Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.314132 5113 scope.go:117] "RemoveContainer" 
containerID="f3422c39fb038549f1142082ec0e669c28fdc1c392f2d3a56cb194fce22b0964" Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.314201 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7b4659c5bd-2wllj"] Jan 21 09:20:28 crc kubenswrapper[5113]: E0121 09:20:28.314606 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3422c39fb038549f1142082ec0e669c28fdc1c392f2d3a56cb194fce22b0964\": container with ID starting with f3422c39fb038549f1142082ec0e669c28fdc1c392f2d3a56cb194fce22b0964 not found: ID does not exist" containerID="f3422c39fb038549f1142082ec0e669c28fdc1c392f2d3a56cb194fce22b0964" Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.314646 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3422c39fb038549f1142082ec0e669c28fdc1c392f2d3a56cb194fce22b0964"} err="failed to get container status \"f3422c39fb038549f1142082ec0e669c28fdc1c392f2d3a56cb194fce22b0964\": rpc error: code = NotFound desc = could not find container \"f3422c39fb038549f1142082ec0e669c28fdc1c392f2d3a56cb194fce22b0964\": container with ID starting with f3422c39fb038549f1142082ec0e669c28fdc1c392f2d3a56cb194fce22b0964 not found: ID does not exist" Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.314686 5113 scope.go:117] "RemoveContainer" containerID="ecd200f0636ea3664f0eed2ca99711ea1e0df086dd3dc7234b9e778e007fe7c0" Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.318705 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7b4659c5bd-2wllj"] Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.332052 5113 scope.go:117] "RemoveContainer" containerID="662080317cfd5da8db73d3b25408b2e712be9e09862c1af5a7dfe40951af6f3b" Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.344288 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-7jsb2"] Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.351712 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7jsb2"] Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.370631 5113 scope.go:117] "RemoveContainer" containerID="4d6e3b7591eed5fd8fb786d361cf5b89f6a43fd41ca60f124a283b749d26503b" Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.414539 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dd8bcbdb-4fhc4"] Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.452147 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6966fb8454-qfh7p"] Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.853707 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0" path="/var/lib/kubelet/pods/2e8b28cd-10b6-45d2-afe7-7d2fba1c98f0/volumes" Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.854994 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e8d5386-d038-46c5-8e21-fed111412a4b" path="/var/lib/kubelet/pods/3e8d5386-d038-46c5-8e21-fed111412a4b/volumes" Jan 21 09:20:28 crc kubenswrapper[5113]: I0121 09:20:28.855672 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d3e2d21-82e0-48ef-842f-ae785ccea3a9" path="/var/lib/kubelet/pods/9d3e2d21-82e0-48ef-842f-ae785ccea3a9/volumes" Jan 21 09:20:29 crc kubenswrapper[5113]: I0121 09:20:29.215569 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4fxwd"] Jan 21 09:20:29 crc kubenswrapper[5113]: I0121 09:20:29.215862 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4fxwd" podUID="408a7159-50c4-4253-9f85-7c5b87ebbbba" containerName="registry-server" 
containerID="cri-o://e07971153d29503a4e6fe23c814d6082a2cfdb3f2993b974bc8be5530b6571e9" gracePeriod=2 Jan 21 09:20:29 crc kubenswrapper[5113]: I0121 09:20:29.279101 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7dd8bcbdb-4fhc4" event={"ID":"51134b69-646f-4921-9918-13abf2c16642","Type":"ContainerStarted","Data":"d33fc399115035ca7412277471f0ef8c3b06bf3b181d15751c60afaa3ef3bd81"} Jan 21 09:20:29 crc kubenswrapper[5113]: I0121 09:20:29.279136 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7dd8bcbdb-4fhc4" event={"ID":"51134b69-646f-4921-9918-13abf2c16642","Type":"ContainerStarted","Data":"c84f7bdeb62a7d90531fee005e6d6d6fe84837742d267d492f288fa495845c7b"} Jan 21 09:20:29 crc kubenswrapper[5113]: I0121 09:20:29.279418 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-7dd8bcbdb-4fhc4" Jan 21 09:20:29 crc kubenswrapper[5113]: I0121 09:20:29.282120 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6966fb8454-qfh7p" event={"ID":"ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a","Type":"ContainerStarted","Data":"dc7cc22ef314f47bc94c4704a2d3abec3fb68ee9785510a718b40d6587f27221"} Jan 21 09:20:29 crc kubenswrapper[5113]: I0121 09:20:29.282140 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6966fb8454-qfh7p" event={"ID":"ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a","Type":"ContainerStarted","Data":"7a61bf1e834b67999250be722db6e814dcbe16b3f2ad25c2b89b5ad728b11289"} Jan 21 09:20:29 crc kubenswrapper[5113]: I0121 09:20:29.282462 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-6966fb8454-qfh7p" Jan 21 09:20:29 crc kubenswrapper[5113]: I0121 09:20:29.284138 5113 
kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7dd8bcbdb-4fhc4" Jan 21 09:20:29 crc kubenswrapper[5113]: I0121 09:20:29.286839 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6966fb8454-qfh7p" Jan 21 09:20:29 crc kubenswrapper[5113]: I0121 09:20:29.295190 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7dd8bcbdb-4fhc4" podStartSLOduration=3.295173714 podStartE2EDuration="3.295173714s" podCreationTimestamp="2026-01-21 09:20:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:20:29.294352662 +0000 UTC m=+158.795179711" watchObservedRunningTime="2026-01-21 09:20:29.295173714 +0000 UTC m=+158.796000763" Jan 21 09:20:30 crc kubenswrapper[5113]: I0121 09:20:30.290812 5113 generic.go:358] "Generic (PLEG): container finished" podID="408a7159-50c4-4253-9f85-7c5b87ebbbba" containerID="e07971153d29503a4e6fe23c814d6082a2cfdb3f2993b974bc8be5530b6571e9" exitCode=0 Jan 21 09:20:30 crc kubenswrapper[5113]: I0121 09:20:30.290896 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4fxwd" event={"ID":"408a7159-50c4-4253-9f85-7c5b87ebbbba","Type":"ContainerDied","Data":"e07971153d29503a4e6fe23c814d6082a2cfdb3f2993b974bc8be5530b6571e9"} Jan 21 09:20:31 crc kubenswrapper[5113]: I0121 09:20:31.507218 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4fxwd" Jan 21 09:20:31 crc kubenswrapper[5113]: I0121 09:20:31.528226 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6966fb8454-qfh7p" podStartSLOduration=5.528208612 podStartE2EDuration="5.528208612s" podCreationTimestamp="2026-01-21 09:20:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:20:29.337395067 +0000 UTC m=+158.838222116" watchObservedRunningTime="2026-01-21 09:20:31.528208612 +0000 UTC m=+161.029035671" Jan 21 09:20:31 crc kubenswrapper[5113]: I0121 09:20:31.571803 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98mnj\" (UniqueName: \"kubernetes.io/projected/408a7159-50c4-4253-9f85-7c5b87ebbbba-kube-api-access-98mnj\") pod \"408a7159-50c4-4253-9f85-7c5b87ebbbba\" (UID: \"408a7159-50c4-4253-9f85-7c5b87ebbbba\") " Jan 21 09:20:31 crc kubenswrapper[5113]: I0121 09:20:31.571919 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/408a7159-50c4-4253-9f85-7c5b87ebbbba-catalog-content\") pod \"408a7159-50c4-4253-9f85-7c5b87ebbbba\" (UID: \"408a7159-50c4-4253-9f85-7c5b87ebbbba\") " Jan 21 09:20:31 crc kubenswrapper[5113]: I0121 09:20:31.571995 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/408a7159-50c4-4253-9f85-7c5b87ebbbba-utilities\") pod \"408a7159-50c4-4253-9f85-7c5b87ebbbba\" (UID: \"408a7159-50c4-4253-9f85-7c5b87ebbbba\") " Jan 21 09:20:31 crc kubenswrapper[5113]: I0121 09:20:31.573361 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/408a7159-50c4-4253-9f85-7c5b87ebbbba-utilities" (OuterVolumeSpecName: "utilities") pod 
"408a7159-50c4-4253-9f85-7c5b87ebbbba" (UID: "408a7159-50c4-4253-9f85-7c5b87ebbbba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:20:31 crc kubenswrapper[5113]: I0121 09:20:31.577755 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/408a7159-50c4-4253-9f85-7c5b87ebbbba-kube-api-access-98mnj" (OuterVolumeSpecName: "kube-api-access-98mnj") pod "408a7159-50c4-4253-9f85-7c5b87ebbbba" (UID: "408a7159-50c4-4253-9f85-7c5b87ebbbba"). InnerVolumeSpecName "kube-api-access-98mnj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:20:31 crc kubenswrapper[5113]: I0121 09:20:31.588058 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/408a7159-50c4-4253-9f85-7c5b87ebbbba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "408a7159-50c4-4253-9f85-7c5b87ebbbba" (UID: "408a7159-50c4-4253-9f85-7c5b87ebbbba"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:20:31 crc kubenswrapper[5113]: I0121 09:20:31.673503 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/408a7159-50c4-4253-9f85-7c5b87ebbbba-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 09:20:31 crc kubenswrapper[5113]: I0121 09:20:31.673929 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/408a7159-50c4-4253-9f85-7c5b87ebbbba-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 09:20:31 crc kubenswrapper[5113]: I0121 09:20:31.674023 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-98mnj\" (UniqueName: \"kubernetes.io/projected/408a7159-50c4-4253-9f85-7c5b87ebbbba-kube-api-access-98mnj\") on node \"crc\" DevicePath \"\"" Jan 21 09:20:32 crc kubenswrapper[5113]: I0121 09:20:32.131915 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-frj7n" Jan 21 09:20:32 crc kubenswrapper[5113]: I0121 09:20:32.178165 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-frj7n" Jan 21 09:20:32 crc kubenswrapper[5113]: I0121 09:20:32.320424 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4fxwd"
Jan 21 09:20:32 crc kubenswrapper[5113]: I0121 09:20:32.320882 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4fxwd" event={"ID":"408a7159-50c4-4253-9f85-7c5b87ebbbba","Type":"ContainerDied","Data":"af55263fef06c59bdd2571125b138608c15a9f8352c8e79727fd16e6e826d0ed"}
Jan 21 09:20:32 crc kubenswrapper[5113]: I0121 09:20:32.320968 5113 scope.go:117] "RemoveContainer" containerID="e07971153d29503a4e6fe23c814d6082a2cfdb3f2993b974bc8be5530b6571e9"
Jan 21 09:20:32 crc kubenswrapper[5113]: I0121 09:20:32.344508 5113 scope.go:117] "RemoveContainer" containerID="8c12bb3929383843bddb1d695794aea7c376913499116b2f02878aa17d8d8124"
Jan 21 09:20:32 crc kubenswrapper[5113]: I0121 09:20:32.355452 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4fxwd"]
Jan 21 09:20:32 crc kubenswrapper[5113]: I0121 09:20:32.361690 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4fxwd"]
Jan 21 09:20:32 crc kubenswrapper[5113]: I0121 09:20:32.386886 5113 scope.go:117] "RemoveContainer" containerID="e3feb453635b9800fccec423b7e0525174dfa794eb3da354469179a66252cf18"
Jan 21 09:20:32 crc kubenswrapper[5113]: I0121 09:20:32.849578 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="408a7159-50c4-4253-9f85-7c5b87ebbbba" path="/var/lib/kubelet/pods/408a7159-50c4-4253-9f85-7c5b87ebbbba/volumes"
Jan 21 09:20:33 crc kubenswrapper[5113]: I0121 09:20:33.255439 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vlng9"
Jan 21 09:20:33 crc kubenswrapper[5113]: I0121 09:20:33.257843 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-clhrp"
Jan 21 09:20:35 crc kubenswrapper[5113]: I0121 09:20:35.131395 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pkwr7"
Jan 21 09:20:35 crc kubenswrapper[5113]: I0121 09:20:35.173637 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pkwr7"
Jan 21 09:20:35 crc kubenswrapper[5113]: I0121 09:20:35.289433 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Jan 21 09:20:35 crc kubenswrapper[5113]: I0121 09:20:35.289941 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="408a7159-50c4-4253-9f85-7c5b87ebbbba" containerName="extract-content"
Jan 21 09:20:35 crc kubenswrapper[5113]: I0121 09:20:35.289957 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="408a7159-50c4-4253-9f85-7c5b87ebbbba" containerName="extract-content"
Jan 21 09:20:35 crc kubenswrapper[5113]: I0121 09:20:35.289975 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="408a7159-50c4-4253-9f85-7c5b87ebbbba" containerName="registry-server"
Jan 21 09:20:35 crc kubenswrapper[5113]: I0121 09:20:35.289981 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="408a7159-50c4-4253-9f85-7c5b87ebbbba" containerName="registry-server"
Jan 21 09:20:35 crc kubenswrapper[5113]: I0121 09:20:35.289988 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="408a7159-50c4-4253-9f85-7c5b87ebbbba" containerName="extract-utilities"
Jan 21 09:20:35 crc kubenswrapper[5113]: I0121 09:20:35.289993 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="408a7159-50c4-4253-9f85-7c5b87ebbbba" containerName="extract-utilities"
Jan 21 09:20:35 crc kubenswrapper[5113]: I0121 09:20:35.290090 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="408a7159-50c4-4253-9f85-7c5b87ebbbba" containerName="registry-server"
Jan 21 09:20:36 crc kubenswrapper[5113]: I0121 09:20:36.196182 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Jan 21 09:20:36 crc kubenswrapper[5113]: I0121 09:20:36.196525 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 21 09:20:36 crc kubenswrapper[5113]: I0121 09:20:36.196643 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6dthc"
Jan 21 09:20:36 crc kubenswrapper[5113]: I0121 09:20:36.196659 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-clhrp"]
Jan 21 09:20:36 crc kubenswrapper[5113]: I0121 09:20:36.197155 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-clhrp" podUID="7aca680f-7aa4-47e3-a52c-08e8e8a39c84" containerName="registry-server" containerID="cri-o://fac554e87e08f441d6c80e8590a56a5d0e1b1ca471ffea784e9a0b18c3577926" gracePeriod=2
Jan 21 09:20:36 crc kubenswrapper[5113]: I0121 09:20:36.199971 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\""
Jan 21 09:20:36 crc kubenswrapper[5113]: I0121 09:20:36.200876 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\""
Jan 21 09:20:36 crc kubenswrapper[5113]: I0121 09:20:36.249565 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6dthc"
Jan 21 09:20:36 crc kubenswrapper[5113]: I0121 09:20:36.373809 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d54a2c2c-dab2-4b9d-a0c5-d1bf799170fa-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"d54a2c2c-dab2-4b9d-a0c5-d1bf799170fa\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 21 09:20:36 crc kubenswrapper[5113]: I0121 09:20:36.374231 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d54a2c2c-dab2-4b9d-a0c5-d1bf799170fa-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"d54a2c2c-dab2-4b9d-a0c5-d1bf799170fa\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 21 09:20:36 crc kubenswrapper[5113]: I0121 09:20:36.475329 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d54a2c2c-dab2-4b9d-a0c5-d1bf799170fa-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"d54a2c2c-dab2-4b9d-a0c5-d1bf799170fa\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 21 09:20:36 crc kubenswrapper[5113]: I0121 09:20:36.475427 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d54a2c2c-dab2-4b9d-a0c5-d1bf799170fa-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"d54a2c2c-dab2-4b9d-a0c5-d1bf799170fa\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 21 09:20:36 crc kubenswrapper[5113]: I0121 09:20:36.475759 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d54a2c2c-dab2-4b9d-a0c5-d1bf799170fa-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"d54a2c2c-dab2-4b9d-a0c5-d1bf799170fa\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 21 09:20:36 crc kubenswrapper[5113]: I0121 09:20:36.499269 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d54a2c2c-dab2-4b9d-a0c5-d1bf799170fa-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"d54a2c2c-dab2-4b9d-a0c5-d1bf799170fa\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 21 09:20:36 crc kubenswrapper[5113]: I0121 09:20:36.517928 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 21 09:20:37 crc kubenswrapper[5113]: I0121 09:20:37.005774 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Jan 21 09:20:37 crc kubenswrapper[5113]: I0121 09:20:37.188487 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-clhrp"
Jan 21 09:20:37 crc kubenswrapper[5113]: I0121 09:20:37.298649 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aca680f-7aa4-47e3-a52c-08e8e8a39c84-utilities\") pod \"7aca680f-7aa4-47e3-a52c-08e8e8a39c84\" (UID: \"7aca680f-7aa4-47e3-a52c-08e8e8a39c84\") "
Jan 21 09:20:37 crc kubenswrapper[5113]: I0121 09:20:37.298728 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2thbj\" (UniqueName: \"kubernetes.io/projected/7aca680f-7aa4-47e3-a52c-08e8e8a39c84-kube-api-access-2thbj\") pod \"7aca680f-7aa4-47e3-a52c-08e8e8a39c84\" (UID: \"7aca680f-7aa4-47e3-a52c-08e8e8a39c84\") "
Jan 21 09:20:37 crc kubenswrapper[5113]: I0121 09:20:37.298841 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aca680f-7aa4-47e3-a52c-08e8e8a39c84-catalog-content\") pod \"7aca680f-7aa4-47e3-a52c-08e8e8a39c84\" (UID: \"7aca680f-7aa4-47e3-a52c-08e8e8a39c84\") "
Jan 21 09:20:37 crc kubenswrapper[5113]: I0121 09:20:37.299801 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7aca680f-7aa4-47e3-a52c-08e8e8a39c84-utilities" (OuterVolumeSpecName: "utilities") pod "7aca680f-7aa4-47e3-a52c-08e8e8a39c84" (UID: "7aca680f-7aa4-47e3-a52c-08e8e8a39c84"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:20:37 crc kubenswrapper[5113]: I0121 09:20:37.304352 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aca680f-7aa4-47e3-a52c-08e8e8a39c84-kube-api-access-2thbj" (OuterVolumeSpecName: "kube-api-access-2thbj") pod "7aca680f-7aa4-47e3-a52c-08e8e8a39c84" (UID: "7aca680f-7aa4-47e3-a52c-08e8e8a39c84"). InnerVolumeSpecName "kube-api-access-2thbj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:20:37 crc kubenswrapper[5113]: I0121 09:20:37.326856 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7aca680f-7aa4-47e3-a52c-08e8e8a39c84-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7aca680f-7aa4-47e3-a52c-08e8e8a39c84" (UID: "7aca680f-7aa4-47e3-a52c-08e8e8a39c84"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:20:37 crc kubenswrapper[5113]: I0121 09:20:37.353536 5113 generic.go:358] "Generic (PLEG): container finished" podID="7aca680f-7aa4-47e3-a52c-08e8e8a39c84" containerID="fac554e87e08f441d6c80e8590a56a5d0e1b1ca471ffea784e9a0b18c3577926" exitCode=0
Jan 21 09:20:37 crc kubenswrapper[5113]: I0121 09:20:37.353680 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-clhrp"
Jan 21 09:20:37 crc kubenswrapper[5113]: I0121 09:20:37.353688 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-clhrp" event={"ID":"7aca680f-7aa4-47e3-a52c-08e8e8a39c84","Type":"ContainerDied","Data":"fac554e87e08f441d6c80e8590a56a5d0e1b1ca471ffea784e9a0b18c3577926"}
Jan 21 09:20:37 crc kubenswrapper[5113]: I0121 09:20:37.353751 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-clhrp" event={"ID":"7aca680f-7aa4-47e3-a52c-08e8e8a39c84","Type":"ContainerDied","Data":"5682e3b85b7a7531724a0da43c00627e2c3c1e098e75ce78cb0acfc7ba10ca36"}
Jan 21 09:20:37 crc kubenswrapper[5113]: I0121 09:20:37.353772 5113 scope.go:117] "RemoveContainer" containerID="fac554e87e08f441d6c80e8590a56a5d0e1b1ca471ffea784e9a0b18c3577926"
Jan 21 09:20:37 crc kubenswrapper[5113]: I0121 09:20:37.354982 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"d54a2c2c-dab2-4b9d-a0c5-d1bf799170fa","Type":"ContainerStarted","Data":"6726c14624a20705e6d0f7e2c62d5e2f83f796d0254287d8e2748ec1ff682598"}
Jan 21 09:20:37 crc kubenswrapper[5113]: I0121 09:20:37.369592 5113 scope.go:117] "RemoveContainer" containerID="76968e347c0612c47b0c784fb1fe3dc846511d7103e1c1c962f901d56bd7524c"
Jan 21 09:20:37 crc kubenswrapper[5113]: I0121 09:20:37.383237 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-clhrp"]
Jan 21 09:20:37 crc kubenswrapper[5113]: I0121 09:20:37.386436 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-clhrp"]
Jan 21 09:20:37 crc kubenswrapper[5113]: I0121 09:20:37.393625 5113 scope.go:117] "RemoveContainer" containerID="3d6465db6b99abd8fe55ee944e6858e161373e78259ffd4ad993416cfd600cc4"
Jan 21 09:20:37 crc kubenswrapper[5113]: I0121 09:20:37.400465 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7aca680f-7aa4-47e3-a52c-08e8e8a39c84-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 09:20:37 crc kubenswrapper[5113]: I0121 09:20:37.400500 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2thbj\" (UniqueName: \"kubernetes.io/projected/7aca680f-7aa4-47e3-a52c-08e8e8a39c84-kube-api-access-2thbj\") on node \"crc\" DevicePath \"\""
Jan 21 09:20:37 crc kubenswrapper[5113]: I0121 09:20:37.400511 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7aca680f-7aa4-47e3-a52c-08e8e8a39c84-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 09:20:37 crc kubenswrapper[5113]: I0121 09:20:37.411541 5113 scope.go:117] "RemoveContainer" containerID="fac554e87e08f441d6c80e8590a56a5d0e1b1ca471ffea784e9a0b18c3577926"
Jan 21 09:20:37 crc kubenswrapper[5113]: E0121 09:20:37.412486 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fac554e87e08f441d6c80e8590a56a5d0e1b1ca471ffea784e9a0b18c3577926\": container with ID starting with fac554e87e08f441d6c80e8590a56a5d0e1b1ca471ffea784e9a0b18c3577926 not found: ID does not exist" containerID="fac554e87e08f441d6c80e8590a56a5d0e1b1ca471ffea784e9a0b18c3577926"
Jan 21 09:20:37 crc kubenswrapper[5113]: I0121 09:20:37.412525 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fac554e87e08f441d6c80e8590a56a5d0e1b1ca471ffea784e9a0b18c3577926"} err="failed to get container status \"fac554e87e08f441d6c80e8590a56a5d0e1b1ca471ffea784e9a0b18c3577926\": rpc error: code = NotFound desc = could not find container \"fac554e87e08f441d6c80e8590a56a5d0e1b1ca471ffea784e9a0b18c3577926\": container with ID starting with fac554e87e08f441d6c80e8590a56a5d0e1b1ca471ffea784e9a0b18c3577926 not found: ID does not exist"
Jan 21 09:20:37 crc kubenswrapper[5113]: I0121 09:20:37.412545 5113 scope.go:117] "RemoveContainer" containerID="76968e347c0612c47b0c784fb1fe3dc846511d7103e1c1c962f901d56bd7524c"
Jan 21 09:20:37 crc kubenswrapper[5113]: E0121 09:20:37.412911 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76968e347c0612c47b0c784fb1fe3dc846511d7103e1c1c962f901d56bd7524c\": container with ID starting with 76968e347c0612c47b0c784fb1fe3dc846511d7103e1c1c962f901d56bd7524c not found: ID does not exist" containerID="76968e347c0612c47b0c784fb1fe3dc846511d7103e1c1c962f901d56bd7524c"
Jan 21 09:20:37 crc kubenswrapper[5113]: I0121 09:20:37.412952 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76968e347c0612c47b0c784fb1fe3dc846511d7103e1c1c962f901d56bd7524c"} err="failed to get container status \"76968e347c0612c47b0c784fb1fe3dc846511d7103e1c1c962f901d56bd7524c\": rpc error: code = NotFound desc = could not find container \"76968e347c0612c47b0c784fb1fe3dc846511d7103e1c1c962f901d56bd7524c\": container with ID starting with 76968e347c0612c47b0c784fb1fe3dc846511d7103e1c1c962f901d56bd7524c not found: ID does not exist"
Jan 21 09:20:37 crc kubenswrapper[5113]: I0121 09:20:37.412978 5113 scope.go:117] "RemoveContainer" containerID="3d6465db6b99abd8fe55ee944e6858e161373e78259ffd4ad993416cfd600cc4"
Jan 21 09:20:37 crc kubenswrapper[5113]: E0121 09:20:37.413511 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d6465db6b99abd8fe55ee944e6858e161373e78259ffd4ad993416cfd600cc4\": container with ID starting with 3d6465db6b99abd8fe55ee944e6858e161373e78259ffd4ad993416cfd600cc4 not found: ID does not exist" containerID="3d6465db6b99abd8fe55ee944e6858e161373e78259ffd4ad993416cfd600cc4"
Jan 21 09:20:37 crc kubenswrapper[5113]: I0121 09:20:37.413533 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d6465db6b99abd8fe55ee944e6858e161373e78259ffd4ad993416cfd600cc4"} err="failed to get container status \"3d6465db6b99abd8fe55ee944e6858e161373e78259ffd4ad993416cfd600cc4\": rpc error: code = NotFound desc = could not find container \"3d6465db6b99abd8fe55ee944e6858e161373e78259ffd4ad993416cfd600cc4\": container with ID starting with 3d6465db6b99abd8fe55ee944e6858e161373e78259ffd4ad993416cfd600cc4 not found: ID does not exist"
Jan 21 09:20:38 crc kubenswrapper[5113]: I0121 09:20:38.215246 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6dthc"]
Jan 21 09:20:38 crc kubenswrapper[5113]: I0121 09:20:38.215916 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6dthc" podUID="4296669d-6ddb-4410-877f-63072f824b28" containerName="registry-server" containerID="cri-o://6fdf96397d8196fea4b7617fb80f0ab26f6bdade6cfd65bcff9d408184a75b56" gracePeriod=2
Jan 21 09:20:38 crc kubenswrapper[5113]: I0121 09:20:38.378880 5113 generic.go:358] "Generic (PLEG): container finished" podID="d54a2c2c-dab2-4b9d-a0c5-d1bf799170fa" containerID="16edc981bb4d2a41645565abccb07e4729c5e7933fff8f90d2d405db27ff1437" exitCode=0
Jan 21 09:20:38 crc kubenswrapper[5113]: I0121 09:20:38.379170 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"d54a2c2c-dab2-4b9d-a0c5-d1bf799170fa","Type":"ContainerDied","Data":"16edc981bb4d2a41645565abccb07e4729c5e7933fff8f90d2d405db27ff1437"}
Jan 21 09:20:38 crc kubenswrapper[5113]: I0121 09:20:38.382948 5113 generic.go:358] "Generic (PLEG): container finished" podID="4296669d-6ddb-4410-877f-63072f824b28" containerID="6fdf96397d8196fea4b7617fb80f0ab26f6bdade6cfd65bcff9d408184a75b56" exitCode=0
Jan 21 09:20:38 crc kubenswrapper[5113]: I0121 09:20:38.382993 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6dthc" event={"ID":"4296669d-6ddb-4410-877f-63072f824b28","Type":"ContainerDied","Data":"6fdf96397d8196fea4b7617fb80f0ab26f6bdade6cfd65bcff9d408184a75b56"}
Jan 21 09:20:38 crc kubenswrapper[5113]: I0121 09:20:38.614961 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6dthc"
Jan 21 09:20:38 crc kubenswrapper[5113]: I0121 09:20:38.716780 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4296669d-6ddb-4410-877f-63072f824b28-catalog-content\") pod \"4296669d-6ddb-4410-877f-63072f824b28\" (UID: \"4296669d-6ddb-4410-877f-63072f824b28\") "
Jan 21 09:20:38 crc kubenswrapper[5113]: I0121 09:20:38.716857 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8jbf\" (UniqueName: \"kubernetes.io/projected/4296669d-6ddb-4410-877f-63072f824b28-kube-api-access-h8jbf\") pod \"4296669d-6ddb-4410-877f-63072f824b28\" (UID: \"4296669d-6ddb-4410-877f-63072f824b28\") "
Jan 21 09:20:38 crc kubenswrapper[5113]: I0121 09:20:38.716916 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4296669d-6ddb-4410-877f-63072f824b28-utilities\") pod \"4296669d-6ddb-4410-877f-63072f824b28\" (UID: \"4296669d-6ddb-4410-877f-63072f824b28\") "
Jan 21 09:20:38 crc kubenswrapper[5113]: I0121 09:20:38.718359 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4296669d-6ddb-4410-877f-63072f824b28-utilities" (OuterVolumeSpecName: "utilities") pod "4296669d-6ddb-4410-877f-63072f824b28" (UID: "4296669d-6ddb-4410-877f-63072f824b28"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:20:38 crc kubenswrapper[5113]: I0121 09:20:38.724433 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4296669d-6ddb-4410-877f-63072f824b28-kube-api-access-h8jbf" (OuterVolumeSpecName: "kube-api-access-h8jbf") pod "4296669d-6ddb-4410-877f-63072f824b28" (UID: "4296669d-6ddb-4410-877f-63072f824b28"). InnerVolumeSpecName "kube-api-access-h8jbf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:20:38 crc kubenswrapper[5113]: I0121 09:20:38.818061 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h8jbf\" (UniqueName: \"kubernetes.io/projected/4296669d-6ddb-4410-877f-63072f824b28-kube-api-access-h8jbf\") on node \"crc\" DevicePath \"\""
Jan 21 09:20:38 crc kubenswrapper[5113]: I0121 09:20:38.818103 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4296669d-6ddb-4410-877f-63072f824b28-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 09:20:38 crc kubenswrapper[5113]: I0121 09:20:38.826847 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4296669d-6ddb-4410-877f-63072f824b28-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4296669d-6ddb-4410-877f-63072f824b28" (UID: "4296669d-6ddb-4410-877f-63072f824b28"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:20:38 crc kubenswrapper[5113]: I0121 09:20:38.851845 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7aca680f-7aa4-47e3-a52c-08e8e8a39c84" path="/var/lib/kubelet/pods/7aca680f-7aa4-47e3-a52c-08e8e8a39c84/volumes"
Jan 21 09:20:38 crc kubenswrapper[5113]: I0121 09:20:38.918916 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4296669d-6ddb-4410-877f-63072f824b28-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 09:20:39 crc kubenswrapper[5113]: I0121 09:20:39.397472 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6dthc" event={"ID":"4296669d-6ddb-4410-877f-63072f824b28","Type":"ContainerDied","Data":"793baadc34d7d403e49dc0575a9ce4d9e1f5607889c96f051c5815552eb1c09d"}
Jan 21 09:20:39 crc kubenswrapper[5113]: I0121 09:20:39.397491 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6dthc"
Jan 21 09:20:39 crc kubenswrapper[5113]: I0121 09:20:39.397554 5113 scope.go:117] "RemoveContainer" containerID="6fdf96397d8196fea4b7617fb80f0ab26f6bdade6cfd65bcff9d408184a75b56"
Jan 21 09:20:39 crc kubenswrapper[5113]: I0121 09:20:39.417033 5113 scope.go:117] "RemoveContainer" containerID="54bbcdcfc69342fbf5c71b7e9f181c1b009dc0e600b063dde1fed74fa64b4bec"
Jan 21 09:20:39 crc kubenswrapper[5113]: I0121 09:20:39.417789 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6dthc"]
Jan 21 09:20:39 crc kubenswrapper[5113]: I0121 09:20:39.423907 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6dthc"]
Jan 21 09:20:39 crc kubenswrapper[5113]: I0121 09:20:39.444944 5113 scope.go:117] "RemoveContainer" containerID="4cbeed0d94f87b5a648959ed7a8075866e80a84b95b7b6a615c4fa2eb2618ee2"
Jan 21 09:20:39 crc kubenswrapper[5113]: I0121 09:20:39.708878 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 21 09:20:39 crc kubenswrapper[5113]: I0121 09:20:39.727469 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d54a2c2c-dab2-4b9d-a0c5-d1bf799170fa-kubelet-dir\") pod \"d54a2c2c-dab2-4b9d-a0c5-d1bf799170fa\" (UID: \"d54a2c2c-dab2-4b9d-a0c5-d1bf799170fa\") "
Jan 21 09:20:39 crc kubenswrapper[5113]: I0121 09:20:39.727523 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d54a2c2c-dab2-4b9d-a0c5-d1bf799170fa-kube-api-access\") pod \"d54a2c2c-dab2-4b9d-a0c5-d1bf799170fa\" (UID: \"d54a2c2c-dab2-4b9d-a0c5-d1bf799170fa\") "
Jan 21 09:20:39 crc kubenswrapper[5113]: I0121 09:20:39.727634 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d54a2c2c-dab2-4b9d-a0c5-d1bf799170fa-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d54a2c2c-dab2-4b9d-a0c5-d1bf799170fa" (UID: "d54a2c2c-dab2-4b9d-a0c5-d1bf799170fa"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 09:20:39 crc kubenswrapper[5113]: I0121 09:20:39.728136 5113 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d54a2c2c-dab2-4b9d-a0c5-d1bf799170fa-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 21 09:20:39 crc kubenswrapper[5113]: I0121 09:20:39.732422 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d54a2c2c-dab2-4b9d-a0c5-d1bf799170fa-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d54a2c2c-dab2-4b9d-a0c5-d1bf799170fa" (UID: "d54a2c2c-dab2-4b9d-a0c5-d1bf799170fa"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:20:39 crc kubenswrapper[5113]: I0121 09:20:39.829617 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d54a2c2c-dab2-4b9d-a0c5-d1bf799170fa-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 21 09:20:40 crc kubenswrapper[5113]: I0121 09:20:40.405219 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"d54a2c2c-dab2-4b9d-a0c5-d1bf799170fa","Type":"ContainerDied","Data":"6726c14624a20705e6d0f7e2c62d5e2f83f796d0254287d8e2748ec1ff682598"}
Jan 21 09:20:40 crc kubenswrapper[5113]: I0121 09:20:40.405952 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6726c14624a20705e6d0f7e2c62d5e2f83f796d0254287d8e2748ec1ff682598"
Jan 21 09:20:40 crc kubenswrapper[5113]: I0121 09:20:40.405286 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Jan 21 09:20:40 crc kubenswrapper[5113]: I0121 09:20:40.849281 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4296669d-6ddb-4410-877f-63072f824b28" path="/var/lib/kubelet/pods/4296669d-6ddb-4410-877f-63072f824b28/volumes"
Jan 21 09:20:41 crc kubenswrapper[5113]: I0121 09:20:41.077476 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Jan 21 09:20:41 crc kubenswrapper[5113]: I0121 09:20:41.078006 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4296669d-6ddb-4410-877f-63072f824b28" containerName="extract-utilities"
Jan 21 09:20:41 crc kubenswrapper[5113]: I0121 09:20:41.078022 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="4296669d-6ddb-4410-877f-63072f824b28" containerName="extract-utilities"
Jan 21 09:20:41 crc kubenswrapper[5113]: I0121 09:20:41.078033 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4296669d-6ddb-4410-877f-63072f824b28" containerName="extract-content"
Jan 21 09:20:41 crc kubenswrapper[5113]: I0121 09:20:41.078039 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="4296669d-6ddb-4410-877f-63072f824b28" containerName="extract-content"
Jan 21 09:20:41 crc kubenswrapper[5113]: I0121 09:20:41.078047 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7aca680f-7aa4-47e3-a52c-08e8e8a39c84" containerName="registry-server"
Jan 21 09:20:41 crc kubenswrapper[5113]: I0121 09:20:41.078053 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aca680f-7aa4-47e3-a52c-08e8e8a39c84" containerName="registry-server"
Jan 21 09:20:41 crc kubenswrapper[5113]: I0121 09:20:41.078066 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4296669d-6ddb-4410-877f-63072f824b28" containerName="registry-server"
Jan 21 09:20:41 crc kubenswrapper[5113]: I0121 09:20:41.078071 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="4296669d-6ddb-4410-877f-63072f824b28" containerName="registry-server"
Jan 21 09:20:41 crc kubenswrapper[5113]: I0121 09:20:41.078094 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7aca680f-7aa4-47e3-a52c-08e8e8a39c84" containerName="extract-content"
Jan 21 09:20:41 crc kubenswrapper[5113]: I0121 09:20:41.078099 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aca680f-7aa4-47e3-a52c-08e8e8a39c84" containerName="extract-content"
Jan 21 09:20:41 crc kubenswrapper[5113]: I0121 09:20:41.078111 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d54a2c2c-dab2-4b9d-a0c5-d1bf799170fa" containerName="pruner"
Jan 21 09:20:41 crc kubenswrapper[5113]: I0121 09:20:41.078117 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="d54a2c2c-dab2-4b9d-a0c5-d1bf799170fa" containerName="pruner"
Jan 21 09:20:41 crc kubenswrapper[5113]: I0121 09:20:41.078127 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7aca680f-7aa4-47e3-a52c-08e8e8a39c84" containerName="extract-utilities"
Jan 21 09:20:41 crc kubenswrapper[5113]: I0121 09:20:41.078133 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aca680f-7aa4-47e3-a52c-08e8e8a39c84" containerName="extract-utilities"
Jan 21 09:20:41 crc kubenswrapper[5113]: I0121 09:20:41.078206 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="7aca680f-7aa4-47e3-a52c-08e8e8a39c84" containerName="registry-server"
Jan 21 09:20:41 crc kubenswrapper[5113]: I0121 09:20:41.078218 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="d54a2c2c-dab2-4b9d-a0c5-d1bf799170fa" containerName="pruner"
Jan 21 09:20:41 crc kubenswrapper[5113]: I0121 09:20:41.078227 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="4296669d-6ddb-4410-877f-63072f824b28" containerName="registry-server"
Jan 21 09:20:41 crc kubenswrapper[5113]: I0121 09:20:41.094934 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Jan 21 09:20:41 crc kubenswrapper[5113]: I0121 09:20:41.095084 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Jan 21 09:20:41 crc kubenswrapper[5113]: I0121 09:20:41.111562 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\""
Jan 21 09:20:41 crc kubenswrapper[5113]: I0121 09:20:41.111650 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\""
Jan 21 09:20:41 crc kubenswrapper[5113]: I0121 09:20:41.148377 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ebafce1-e146-4504-982d-5d5a30f42c6f-kube-api-access\") pod \"installer-12-crc\" (UID: \"5ebafce1-e146-4504-982d-5d5a30f42c6f\") " pod="openshift-kube-apiserver/installer-12-crc"
Jan 21 09:20:41 crc kubenswrapper[5113]: I0121 09:20:41.148502 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5ebafce1-e146-4504-982d-5d5a30f42c6f-kubelet-dir\") pod \"installer-12-crc\" (UID: \"5ebafce1-e146-4504-982d-5d5a30f42c6f\") " pod="openshift-kube-apiserver/installer-12-crc"
Jan 21 09:20:41 crc kubenswrapper[5113]: I0121 09:20:41.148826 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5ebafce1-e146-4504-982d-5d5a30f42c6f-var-lock\") pod \"installer-12-crc\" (UID: \"5ebafce1-e146-4504-982d-5d5a30f42c6f\") " pod="openshift-kube-apiserver/installer-12-crc"
Jan 21 09:20:41 crc kubenswrapper[5113]: I0121 09:20:41.250121 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5ebafce1-e146-4504-982d-5d5a30f42c6f-var-lock\") pod \"installer-12-crc\" (UID: \"5ebafce1-e146-4504-982d-5d5a30f42c6f\") " pod="openshift-kube-apiserver/installer-12-crc"
Jan 21 09:20:41 crc kubenswrapper[5113]: I0121 09:20:41.250221 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ebafce1-e146-4504-982d-5d5a30f42c6f-kube-api-access\") pod \"installer-12-crc\" (UID: \"5ebafce1-e146-4504-982d-5d5a30f42c6f\") " pod="openshift-kube-apiserver/installer-12-crc"
Jan 21 09:20:41 crc kubenswrapper[5113]: I0121 09:20:41.250262 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5ebafce1-e146-4504-982d-5d5a30f42c6f-kubelet-dir\") pod \"installer-12-crc\" (UID: \"5ebafce1-e146-4504-982d-5d5a30f42c6f\") " pod="openshift-kube-apiserver/installer-12-crc"
Jan 21 09:20:41 crc kubenswrapper[5113]: I0121 09:20:41.250268 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5ebafce1-e146-4504-982d-5d5a30f42c6f-var-lock\") pod \"installer-12-crc\" (UID: \"5ebafce1-e146-4504-982d-5d5a30f42c6f\") " pod="openshift-kube-apiserver/installer-12-crc"
Jan 21 09:20:41 crc kubenswrapper[5113]: I0121 09:20:41.250398 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5ebafce1-e146-4504-982d-5d5a30f42c6f-kubelet-dir\") pod \"installer-12-crc\" (UID: \"5ebafce1-e146-4504-982d-5d5a30f42c6f\") " pod="openshift-kube-apiserver/installer-12-crc"
Jan 21 09:20:41 crc kubenswrapper[5113]: I0121 09:20:41.271143 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ebafce1-e146-4504-982d-5d5a30f42c6f-kube-api-access\") pod \"installer-12-crc\" (UID: \"5ebafce1-e146-4504-982d-5d5a30f42c6f\") " pod="openshift-kube-apiserver/installer-12-crc"
Jan 21 09:20:41 crc kubenswrapper[5113]: I0121 09:20:41.428388 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Jan 21 09:20:41 crc kubenswrapper[5113]: I0121 09:20:41.900183 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Jan 21 09:20:41 crc kubenswrapper[5113]: W0121 09:20:41.919184 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod5ebafce1_e146_4504_982d_5d5a30f42c6f.slice/crio-55a657d567c24b4426dd6997c35873d04b9dbfa4f5559290557ca847cac3fde8 WatchSource:0}: Error finding container 55a657d567c24b4426dd6997c35873d04b9dbfa4f5559290557ca847cac3fde8: Status 404 returned error can't find the container with id 55a657d567c24b4426dd6997c35873d04b9dbfa4f5559290557ca847cac3fde8
Jan 21 09:20:42 crc kubenswrapper[5113]: I0121 09:20:42.420472 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"5ebafce1-e146-4504-982d-5d5a30f42c6f","Type":"ContainerStarted","Data":"55a657d567c24b4426dd6997c35873d04b9dbfa4f5559290557ca847cac3fde8"}
Jan 21 09:20:43 crc kubenswrapper[5113]: I0121 09:20:43.429929 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"5ebafce1-e146-4504-982d-5d5a30f42c6f","Type":"ContainerStarted","Data":"8fa6b0373442d86b53071e1cc261ea97cac446a064d771e5300a23acd0f25870"}
Jan 21 09:20:43 crc kubenswrapper[5113]: I0121 09:20:43.464314 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=2.46429176 podStartE2EDuration="2.46429176s" podCreationTimestamp="2026-01-21 09:20:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:20:43.459906239 +0000 UTC m=+172.960733328" watchObservedRunningTime="2026-01-21 09:20:43.46429176 +0000 UTC m=+172.965118849"
Jan 21 09:20:46 crc kubenswrapper[5113]: I0121 09:20:46.882686 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6966fb8454-qfh7p"]
Jan 21 09:20:46 crc kubenswrapper[5113]: I0121 09:20:46.883249 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6966fb8454-qfh7p" podUID="ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a" containerName="controller-manager" containerID="cri-o://dc7cc22ef314f47bc94c4704a2d3abec3fb68ee9785510a718b40d6587f27221" gracePeriod=30
Jan 21 09:20:46 crc kubenswrapper[5113]: I0121 09:20:46.896474 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dd8bcbdb-4fhc4"]
Jan 21 09:20:46 crc kubenswrapper[5113]: I0121 09:20:46.897061 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7dd8bcbdb-4fhc4" podUID="51134b69-646f-4921-9918-13abf2c16642" containerName="route-controller-manager" containerID="cri-o://d33fc399115035ca7412277471f0ef8c3b06bf3b181d15751c60afaa3ef3bd81" gracePeriod=30
Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.373276 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7dd8bcbdb-4fhc4"
Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.409209 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f9f44588-cj8mj"]
Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.409711 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="51134b69-646f-4921-9918-13abf2c16642" containerName="route-controller-manager"
Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.409728 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="51134b69-646f-4921-9918-13abf2c16642" containerName="route-controller-manager"
Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.409845 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="51134b69-646f-4921-9918-13abf2c16642" containerName="route-controller-manager"
Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.413950 5113 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f9f44588-cj8mj" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.427581 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f9f44588-cj8mj"] Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.442549 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/51134b69-646f-4921-9918-13abf2c16642-tmp\") pod \"51134b69-646f-4921-9918-13abf2c16642\" (UID: \"51134b69-646f-4921-9918-13abf2c16642\") " Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.442664 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51134b69-646f-4921-9918-13abf2c16642-client-ca\") pod \"51134b69-646f-4921-9918-13abf2c16642\" (UID: \"51134b69-646f-4921-9918-13abf2c16642\") " Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.442727 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fs69c\" (UniqueName: \"kubernetes.io/projected/51134b69-646f-4921-9918-13abf2c16642-kube-api-access-fs69c\") pod \"51134b69-646f-4921-9918-13abf2c16642\" (UID: \"51134b69-646f-4921-9918-13abf2c16642\") " Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.442872 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51134b69-646f-4921-9918-13abf2c16642-serving-cert\") pod \"51134b69-646f-4921-9918-13abf2c16642\" (UID: \"51134b69-646f-4921-9918-13abf2c16642\") " Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.443025 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51134b69-646f-4921-9918-13abf2c16642-tmp" (OuterVolumeSpecName: "tmp") pod "51134b69-646f-4921-9918-13abf2c16642" (UID: 
"51134b69-646f-4921-9918-13abf2c16642"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.443056 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51134b69-646f-4921-9918-13abf2c16642-config\") pod \"51134b69-646f-4921-9918-13abf2c16642\" (UID: \"51134b69-646f-4921-9918-13abf2c16642\") " Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.443339 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51134b69-646f-4921-9918-13abf2c16642-client-ca" (OuterVolumeSpecName: "client-ca") pod "51134b69-646f-4921-9918-13abf2c16642" (UID: "51134b69-646f-4921-9918-13abf2c16642"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.443629 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51134b69-646f-4921-9918-13abf2c16642-config" (OuterVolumeSpecName: "config") pod "51134b69-646f-4921-9918-13abf2c16642" (UID: "51134b69-646f-4921-9918-13abf2c16642"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.444090 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51134b69-646f-4921-9918-13abf2c16642-config\") on node \"crc\" DevicePath \"\"" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.444109 5113 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/51134b69-646f-4921-9918-13abf2c16642-tmp\") on node \"crc\" DevicePath \"\"" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.444120 5113 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51134b69-646f-4921-9918-13abf2c16642-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.462864 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51134b69-646f-4921-9918-13abf2c16642-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "51134b69-646f-4921-9918-13abf2c16642" (UID: "51134b69-646f-4921-9918-13abf2c16642"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.468180 5113 generic.go:358] "Generic (PLEG): container finished" podID="ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a" containerID="dc7cc22ef314f47bc94c4704a2d3abec3fb68ee9785510a718b40d6587f27221" exitCode=0 Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.468345 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6966fb8454-qfh7p" event={"ID":"ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a","Type":"ContainerDied","Data":"dc7cc22ef314f47bc94c4704a2d3abec3fb68ee9785510a718b40d6587f27221"} Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.470986 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51134b69-646f-4921-9918-13abf2c16642-kube-api-access-fs69c" (OuterVolumeSpecName: "kube-api-access-fs69c") pod "51134b69-646f-4921-9918-13abf2c16642" (UID: "51134b69-646f-4921-9918-13abf2c16642"). InnerVolumeSpecName "kube-api-access-fs69c". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.477057 5113 generic.go:358] "Generic (PLEG): container finished" podID="51134b69-646f-4921-9918-13abf2c16642" containerID="d33fc399115035ca7412277471f0ef8c3b06bf3b181d15751c60afaa3ef3bd81" exitCode=0 Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.477171 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7dd8bcbdb-4fhc4" event={"ID":"51134b69-646f-4921-9918-13abf2c16642","Type":"ContainerDied","Data":"d33fc399115035ca7412277471f0ef8c3b06bf3b181d15751c60afaa3ef3bd81"} Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.477188 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7dd8bcbdb-4fhc4" event={"ID":"51134b69-646f-4921-9918-13abf2c16642","Type":"ContainerDied","Data":"c84f7bdeb62a7d90531fee005e6d6d6fe84837742d267d492f288fa495845c7b"} Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.477203 5113 scope.go:117] "RemoveContainer" containerID="d33fc399115035ca7412277471f0ef8c3b06bf3b181d15751c60afaa3ef3bd81" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.477331 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7dd8bcbdb-4fhc4" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.506585 5113 scope.go:117] "RemoveContainer" containerID="d33fc399115035ca7412277471f0ef8c3b06bf3b181d15751c60afaa3ef3bd81" Jan 21 09:20:47 crc kubenswrapper[5113]: E0121 09:20:47.509519 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d33fc399115035ca7412277471f0ef8c3b06bf3b181d15751c60afaa3ef3bd81\": container with ID starting with d33fc399115035ca7412277471f0ef8c3b06bf3b181d15751c60afaa3ef3bd81 not found: ID does not exist" containerID="d33fc399115035ca7412277471f0ef8c3b06bf3b181d15751c60afaa3ef3bd81" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.509679 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d33fc399115035ca7412277471f0ef8c3b06bf3b181d15751c60afaa3ef3bd81"} err="failed to get container status \"d33fc399115035ca7412277471f0ef8c3b06bf3b181d15751c60afaa3ef3bd81\": rpc error: code = NotFound desc = could not find container \"d33fc399115035ca7412277471f0ef8c3b06bf3b181d15751c60afaa3ef3bd81\": container with ID starting with d33fc399115035ca7412277471f0ef8c3b06bf3b181d15751c60afaa3ef3bd81 not found: ID does not exist" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.521565 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dd8bcbdb-4fhc4"] Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.524663 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dd8bcbdb-4fhc4"] Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.545181 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/371e7aac-b969-41f2-af39-4dfb9ee44bbb-client-ca\") pod \"route-controller-manager-5f9f44588-cj8mj\" (UID: \"371e7aac-b969-41f2-af39-4dfb9ee44bbb\") " pod="openshift-route-controller-manager/route-controller-manager-5f9f44588-cj8mj" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.545412 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/371e7aac-b969-41f2-af39-4dfb9ee44bbb-serving-cert\") pod \"route-controller-manager-5f9f44588-cj8mj\" (UID: \"371e7aac-b969-41f2-af39-4dfb9ee44bbb\") " pod="openshift-route-controller-manager/route-controller-manager-5f9f44588-cj8mj" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.545640 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/371e7aac-b969-41f2-af39-4dfb9ee44bbb-tmp\") pod \"route-controller-manager-5f9f44588-cj8mj\" (UID: \"371e7aac-b969-41f2-af39-4dfb9ee44bbb\") " pod="openshift-route-controller-manager/route-controller-manager-5f9f44588-cj8mj" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.545748 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/371e7aac-b969-41f2-af39-4dfb9ee44bbb-config\") pod \"route-controller-manager-5f9f44588-cj8mj\" (UID: \"371e7aac-b969-41f2-af39-4dfb9ee44bbb\") " pod="openshift-route-controller-manager/route-controller-manager-5f9f44588-cj8mj" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.545864 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rrtp\" (UniqueName: \"kubernetes.io/projected/371e7aac-b969-41f2-af39-4dfb9ee44bbb-kube-api-access-6rrtp\") pod \"route-controller-manager-5f9f44588-cj8mj\" (UID: \"371e7aac-b969-41f2-af39-4dfb9ee44bbb\") " 
pod="openshift-route-controller-manager/route-controller-manager-5f9f44588-cj8mj" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.546024 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51134b69-646f-4921-9918-13abf2c16642-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.546044 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fs69c\" (UniqueName: \"kubernetes.io/projected/51134b69-646f-4921-9918-13abf2c16642-kube-api-access-fs69c\") on node \"crc\" DevicePath \"\"" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.646620 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/371e7aac-b969-41f2-af39-4dfb9ee44bbb-tmp\") pod \"route-controller-manager-5f9f44588-cj8mj\" (UID: \"371e7aac-b969-41f2-af39-4dfb9ee44bbb\") " pod="openshift-route-controller-manager/route-controller-manager-5f9f44588-cj8mj" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.646675 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/371e7aac-b969-41f2-af39-4dfb9ee44bbb-config\") pod \"route-controller-manager-5f9f44588-cj8mj\" (UID: \"371e7aac-b969-41f2-af39-4dfb9ee44bbb\") " pod="openshift-route-controller-manager/route-controller-manager-5f9f44588-cj8mj" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.646703 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6rrtp\" (UniqueName: \"kubernetes.io/projected/371e7aac-b969-41f2-af39-4dfb9ee44bbb-kube-api-access-6rrtp\") pod \"route-controller-manager-5f9f44588-cj8mj\" (UID: \"371e7aac-b969-41f2-af39-4dfb9ee44bbb\") " pod="openshift-route-controller-manager/route-controller-manager-5f9f44588-cj8mj" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.646756 5113 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/371e7aac-b969-41f2-af39-4dfb9ee44bbb-client-ca\") pod \"route-controller-manager-5f9f44588-cj8mj\" (UID: \"371e7aac-b969-41f2-af39-4dfb9ee44bbb\") " pod="openshift-route-controller-manager/route-controller-manager-5f9f44588-cj8mj" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.646958 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/371e7aac-b969-41f2-af39-4dfb9ee44bbb-serving-cert\") pod \"route-controller-manager-5f9f44588-cj8mj\" (UID: \"371e7aac-b969-41f2-af39-4dfb9ee44bbb\") " pod="openshift-route-controller-manager/route-controller-manager-5f9f44588-cj8mj" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.647125 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/371e7aac-b969-41f2-af39-4dfb9ee44bbb-tmp\") pod \"route-controller-manager-5f9f44588-cj8mj\" (UID: \"371e7aac-b969-41f2-af39-4dfb9ee44bbb\") " pod="openshift-route-controller-manager/route-controller-manager-5f9f44588-cj8mj" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.647685 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/371e7aac-b969-41f2-af39-4dfb9ee44bbb-client-ca\") pod \"route-controller-manager-5f9f44588-cj8mj\" (UID: \"371e7aac-b969-41f2-af39-4dfb9ee44bbb\") " pod="openshift-route-controller-manager/route-controller-manager-5f9f44588-cj8mj" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.647828 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/371e7aac-b969-41f2-af39-4dfb9ee44bbb-config\") pod \"route-controller-manager-5f9f44588-cj8mj\" (UID: \"371e7aac-b969-41f2-af39-4dfb9ee44bbb\") " 
pod="openshift-route-controller-manager/route-controller-manager-5f9f44588-cj8mj" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.653103 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/371e7aac-b969-41f2-af39-4dfb9ee44bbb-serving-cert\") pod \"route-controller-manager-5f9f44588-cj8mj\" (UID: \"371e7aac-b969-41f2-af39-4dfb9ee44bbb\") " pod="openshift-route-controller-manager/route-controller-manager-5f9f44588-cj8mj" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.661596 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rrtp\" (UniqueName: \"kubernetes.io/projected/371e7aac-b969-41f2-af39-4dfb9ee44bbb-kube-api-access-6rrtp\") pod \"route-controller-manager-5f9f44588-cj8mj\" (UID: \"371e7aac-b969-41f2-af39-4dfb9ee44bbb\") " pod="openshift-route-controller-manager/route-controller-manager-5f9f44588-cj8mj" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.676673 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6966fb8454-qfh7p" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.701334 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-c88f6fd8d-96dr8"] Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.701881 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a" containerName="controller-manager" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.701899 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a" containerName="controller-manager" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.701985 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a" containerName="controller-manager" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.709350 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c88f6fd8d-96dr8" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.714895 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c88f6fd8d-96dr8"] Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.727293 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f9f44588-cj8mj" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.748597 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a-proxy-ca-bundles\") pod \"ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a\" (UID: \"ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a\") " Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.748686 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a-client-ca\") pod \"ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a\" (UID: \"ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a\") " Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.748714 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcm4q\" (UniqueName: \"kubernetes.io/projected/ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a-kube-api-access-fcm4q\") pod \"ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a\" (UID: \"ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a\") " Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.748810 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a-serving-cert\") pod \"ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a\" (UID: \"ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a\") " Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.748834 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a-tmp\") pod \"ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a\" (UID: \"ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a\") " Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.748942 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a-config\") pod \"ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a\" (UID: \"ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a\") " Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.749022 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p488k\" (UniqueName: \"kubernetes.io/projected/04de374b-ab52-4152-9687-3812b901345b-kube-api-access-p488k\") pod \"controller-manager-c88f6fd8d-96dr8\" (UID: \"04de374b-ab52-4152-9687-3812b901345b\") " pod="openshift-controller-manager/controller-manager-c88f6fd8d-96dr8" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.749066 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04de374b-ab52-4152-9687-3812b901345b-client-ca\") pod \"controller-manager-c88f6fd8d-96dr8\" (UID: \"04de374b-ab52-4152-9687-3812b901345b\") " pod="openshift-controller-manager/controller-manager-c88f6fd8d-96dr8" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.749127 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04de374b-ab52-4152-9687-3812b901345b-config\") pod \"controller-manager-c88f6fd8d-96dr8\" (UID: \"04de374b-ab52-4152-9687-3812b901345b\") " pod="openshift-controller-manager/controller-manager-c88f6fd8d-96dr8" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.749154 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04de374b-ab52-4152-9687-3812b901345b-serving-cert\") pod \"controller-manager-c88f6fd8d-96dr8\" (UID: \"04de374b-ab52-4152-9687-3812b901345b\") " pod="openshift-controller-manager/controller-manager-c88f6fd8d-96dr8" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.749184 
5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/04de374b-ab52-4152-9687-3812b901345b-proxy-ca-bundles\") pod \"controller-manager-c88f6fd8d-96dr8\" (UID: \"04de374b-ab52-4152-9687-3812b901345b\") " pod="openshift-controller-manager/controller-manager-c88f6fd8d-96dr8" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.749200 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/04de374b-ab52-4152-9687-3812b901345b-tmp\") pod \"controller-manager-c88f6fd8d-96dr8\" (UID: \"04de374b-ab52-4152-9687-3812b901345b\") " pod="openshift-controller-manager/controller-manager-c88f6fd8d-96dr8" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.751400 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a-tmp" (OuterVolumeSpecName: "tmp") pod "ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a" (UID: "ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.751500 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a" (UID: "ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.751420 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a-client-ca" (OuterVolumeSpecName: "client-ca") pod "ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a" (UID: "ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.751872 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a-config" (OuterVolumeSpecName: "config") pod "ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a" (UID: "ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.754748 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a-kube-api-access-fcm4q" (OuterVolumeSpecName: "kube-api-access-fcm4q") pod "ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a" (UID: "ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a"). InnerVolumeSpecName "kube-api-access-fcm4q". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.755126 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a" (UID: "ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.851277 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04de374b-ab52-4152-9687-3812b901345b-config\") pod \"controller-manager-c88f6fd8d-96dr8\" (UID: \"04de374b-ab52-4152-9687-3812b901345b\") " pod="openshift-controller-manager/controller-manager-c88f6fd8d-96dr8" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.851547 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04de374b-ab52-4152-9687-3812b901345b-serving-cert\") pod \"controller-manager-c88f6fd8d-96dr8\" (UID: \"04de374b-ab52-4152-9687-3812b901345b\") " pod="openshift-controller-manager/controller-manager-c88f6fd8d-96dr8" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.851577 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/04de374b-ab52-4152-9687-3812b901345b-proxy-ca-bundles\") pod \"controller-manager-c88f6fd8d-96dr8\" (UID: \"04de374b-ab52-4152-9687-3812b901345b\") " pod="openshift-controller-manager/controller-manager-c88f6fd8d-96dr8" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.851595 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/04de374b-ab52-4152-9687-3812b901345b-tmp\") pod \"controller-manager-c88f6fd8d-96dr8\" (UID: \"04de374b-ab52-4152-9687-3812b901345b\") " pod="openshift-controller-manager/controller-manager-c88f6fd8d-96dr8" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.851622 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p488k\" (UniqueName: \"kubernetes.io/projected/04de374b-ab52-4152-9687-3812b901345b-kube-api-access-p488k\") pod 
\"controller-manager-c88f6fd8d-96dr8\" (UID: \"04de374b-ab52-4152-9687-3812b901345b\") " pod="openshift-controller-manager/controller-manager-c88f6fd8d-96dr8" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.851656 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04de374b-ab52-4152-9687-3812b901345b-client-ca\") pod \"controller-manager-c88f6fd8d-96dr8\" (UID: \"04de374b-ab52-4152-9687-3812b901345b\") " pod="openshift-controller-manager/controller-manager-c88f6fd8d-96dr8" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.851715 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a-config\") on node \"crc\" DevicePath \"\"" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.851726 5113 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.851749 5113 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.851758 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fcm4q\" (UniqueName: \"kubernetes.io/projected/ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a-kube-api-access-fcm4q\") on node \"crc\" DevicePath \"\"" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.851766 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.851774 5113 reconciler_common.go:299] "Volume 
detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a-tmp\") on node \"crc\" DevicePath \"\"" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.852538 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04de374b-ab52-4152-9687-3812b901345b-client-ca\") pod \"controller-manager-c88f6fd8d-96dr8\" (UID: \"04de374b-ab52-4152-9687-3812b901345b\") " pod="openshift-controller-manager/controller-manager-c88f6fd8d-96dr8" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.852826 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/04de374b-ab52-4152-9687-3812b901345b-proxy-ca-bundles\") pod \"controller-manager-c88f6fd8d-96dr8\" (UID: \"04de374b-ab52-4152-9687-3812b901345b\") " pod="openshift-controller-manager/controller-manager-c88f6fd8d-96dr8" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.853492 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/04de374b-ab52-4152-9687-3812b901345b-tmp\") pod \"controller-manager-c88f6fd8d-96dr8\" (UID: \"04de374b-ab52-4152-9687-3812b901345b\") " pod="openshift-controller-manager/controller-manager-c88f6fd8d-96dr8" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.858352 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04de374b-ab52-4152-9687-3812b901345b-serving-cert\") pod \"controller-manager-c88f6fd8d-96dr8\" (UID: \"04de374b-ab52-4152-9687-3812b901345b\") " pod="openshift-controller-manager/controller-manager-c88f6fd8d-96dr8" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.859132 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04de374b-ab52-4152-9687-3812b901345b-config\") pod 
\"controller-manager-c88f6fd8d-96dr8\" (UID: \"04de374b-ab52-4152-9687-3812b901345b\") " pod="openshift-controller-manager/controller-manager-c88f6fd8d-96dr8" Jan 21 09:20:47 crc kubenswrapper[5113]: I0121 09:20:47.875544 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p488k\" (UniqueName: \"kubernetes.io/projected/04de374b-ab52-4152-9687-3812b901345b-kube-api-access-p488k\") pod \"controller-manager-c88f6fd8d-96dr8\" (UID: \"04de374b-ab52-4152-9687-3812b901345b\") " pod="openshift-controller-manager/controller-manager-c88f6fd8d-96dr8" Jan 21 09:20:48 crc kubenswrapper[5113]: I0121 09:20:48.027434 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c88f6fd8d-96dr8" Jan 21 09:20:48 crc kubenswrapper[5113]: I0121 09:20:48.150433 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f9f44588-cj8mj"] Jan 21 09:20:48 crc kubenswrapper[5113]: I0121 09:20:48.247078 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c88f6fd8d-96dr8"] Jan 21 09:20:48 crc kubenswrapper[5113]: W0121 09:20:48.267987 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04de374b_ab52_4152_9687_3812b901345b.slice/crio-932214f12cbe32648832d1f75a3f68f880f204cb5ebcbc9287827afc7946abe9 WatchSource:0}: Error finding container 932214f12cbe32648832d1f75a3f68f880f204cb5ebcbc9287827afc7946abe9: Status 404 returned error can't find the container with id 932214f12cbe32648832d1f75a3f68f880f204cb5ebcbc9287827afc7946abe9 Jan 21 09:20:48 crc kubenswrapper[5113]: I0121 09:20:48.484448 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6966fb8454-qfh7p" Jan 21 09:20:48 crc kubenswrapper[5113]: I0121 09:20:48.484441 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6966fb8454-qfh7p" event={"ID":"ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a","Type":"ContainerDied","Data":"7a61bf1e834b67999250be722db6e814dcbe16b3f2ad25c2b89b5ad728b11289"} Jan 21 09:20:48 crc kubenswrapper[5113]: I0121 09:20:48.484623 5113 scope.go:117] "RemoveContainer" containerID="dc7cc22ef314f47bc94c4704a2d3abec3fb68ee9785510a718b40d6587f27221" Jan 21 09:20:48 crc kubenswrapper[5113]: I0121 09:20:48.487891 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f9f44588-cj8mj" event={"ID":"371e7aac-b969-41f2-af39-4dfb9ee44bbb","Type":"ContainerStarted","Data":"238a1160d21df74056a3209aa6bed4096f6ab3438a4fe91a8df185a107f633c5"} Jan 21 09:20:48 crc kubenswrapper[5113]: I0121 09:20:48.487950 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f9f44588-cj8mj" event={"ID":"371e7aac-b969-41f2-af39-4dfb9ee44bbb","Type":"ContainerStarted","Data":"2c02912183fc42ee8b29d287fabb502b4e09aea4553e4a6731d2100af8d30df5"} Jan 21 09:20:48 crc kubenswrapper[5113]: I0121 09:20:48.488180 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-5f9f44588-cj8mj" Jan 21 09:20:48 crc kubenswrapper[5113]: I0121 09:20:48.489921 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c88f6fd8d-96dr8" event={"ID":"04de374b-ab52-4152-9687-3812b901345b","Type":"ContainerStarted","Data":"b81c48efe0d22c0feb81108d1bafa755d83e78a3af0bcea6524f6a6d20f3bd9b"} Jan 21 09:20:48 crc kubenswrapper[5113]: I0121 09:20:48.489969 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-c88f6fd8d-96dr8" event={"ID":"04de374b-ab52-4152-9687-3812b901345b","Type":"ContainerStarted","Data":"932214f12cbe32648832d1f75a3f68f880f204cb5ebcbc9287827afc7946abe9"} Jan 21 09:20:48 crc kubenswrapper[5113]: I0121 09:20:48.490363 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-c88f6fd8d-96dr8" Jan 21 09:20:48 crc kubenswrapper[5113]: I0121 09:20:48.492873 5113 patch_prober.go:28] interesting pod/controller-manager-c88f6fd8d-96dr8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body= Jan 21 09:20:48 crc kubenswrapper[5113]: I0121 09:20:48.492944 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-c88f6fd8d-96dr8" podUID="04de374b-ab52-4152-9687-3812b901345b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" Jan 21 09:20:48 crc kubenswrapper[5113]: I0121 09:20:48.514029 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5f9f44588-cj8mj" podStartSLOduration=1.514004507 podStartE2EDuration="1.514004507s" podCreationTimestamp="2026-01-21 09:20:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:20:48.512187497 +0000 UTC m=+178.013014606" watchObservedRunningTime="2026-01-21 09:20:48.514004507 +0000 UTC m=+178.014831586" Jan 21 09:20:48 crc kubenswrapper[5113]: I0121 09:20:48.538773 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-c88f6fd8d-96dr8" podStartSLOduration=2.538750598 podStartE2EDuration="2.538750598s" podCreationTimestamp="2026-01-21 09:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:20:48.535694084 +0000 UTC m=+178.036521173" watchObservedRunningTime="2026-01-21 09:20:48.538750598 +0000 UTC m=+178.039577657" Jan 21 09:20:48 crc kubenswrapper[5113]: I0121 09:20:48.551416 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6966fb8454-qfh7p"] Jan 21 09:20:48 crc kubenswrapper[5113]: I0121 09:20:48.553591 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6966fb8454-qfh7p"] Jan 21 09:20:48 crc kubenswrapper[5113]: I0121 09:20:48.614214 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5f9f44588-cj8mj" Jan 21 09:20:48 crc kubenswrapper[5113]: I0121 09:20:48.866631 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51134b69-646f-4921-9918-13abf2c16642" path="/var/lib/kubelet/pods/51134b69-646f-4921-9918-13abf2c16642/volumes" Jan 21 09:20:48 crc kubenswrapper[5113]: I0121 09:20:48.868374 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a" path="/var/lib/kubelet/pods/ed5a9b2e-3738-45f7-9f41-0d6ea793ce5a/volumes" Jan 21 09:20:49 crc kubenswrapper[5113]: I0121 09:20:49.576579 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-c88f6fd8d-96dr8" Jan 21 09:20:50 crc kubenswrapper[5113]: I0121 09:20:50.296774 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 21 09:21:06 crc 
kubenswrapper[5113]: I0121 09:21:06.906714 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-c88f6fd8d-96dr8"] Jan 21 09:21:06 crc kubenswrapper[5113]: I0121 09:21:06.908129 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-c88f6fd8d-96dr8" podUID="04de374b-ab52-4152-9687-3812b901345b" containerName="controller-manager" containerID="cri-o://b81c48efe0d22c0feb81108d1bafa755d83e78a3af0bcea6524f6a6d20f3bd9b" gracePeriod=30 Jan 21 09:21:06 crc kubenswrapper[5113]: I0121 09:21:06.921842 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f9f44588-cj8mj"] Jan 21 09:21:06 crc kubenswrapper[5113]: I0121 09:21:06.922160 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5f9f44588-cj8mj" podUID="371e7aac-b969-41f2-af39-4dfb9ee44bbb" containerName="route-controller-manager" containerID="cri-o://238a1160d21df74056a3209aa6bed4096f6ab3438a4fe91a8df185a107f633c5" gracePeriod=30 Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.086542 5113 ???:1] "http: TLS handshake error from 192.168.126.11:45924: no serving certificate available for the kubelet" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.393159 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f9f44588-cj8mj" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.428333 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-687f48bfdf-fwqmc"] Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.428943 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="371e7aac-b969-41f2-af39-4dfb9ee44bbb" containerName="route-controller-manager" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.428962 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="371e7aac-b969-41f2-af39-4dfb9ee44bbb" containerName="route-controller-manager" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.429070 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="371e7aac-b969-41f2-af39-4dfb9ee44bbb" containerName="route-controller-manager" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.441363 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-687f48bfdf-fwqmc"] Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.441499 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-687f48bfdf-fwqmc" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.481997 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/371e7aac-b969-41f2-af39-4dfb9ee44bbb-tmp\") pod \"371e7aac-b969-41f2-af39-4dfb9ee44bbb\" (UID: \"371e7aac-b969-41f2-af39-4dfb9ee44bbb\") " Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.482365 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rrtp\" (UniqueName: \"kubernetes.io/projected/371e7aac-b969-41f2-af39-4dfb9ee44bbb-kube-api-access-6rrtp\") pod \"371e7aac-b969-41f2-af39-4dfb9ee44bbb\" (UID: \"371e7aac-b969-41f2-af39-4dfb9ee44bbb\") " Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.482553 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/371e7aac-b969-41f2-af39-4dfb9ee44bbb-client-ca\") pod \"371e7aac-b969-41f2-af39-4dfb9ee44bbb\" (UID: \"371e7aac-b969-41f2-af39-4dfb9ee44bbb\") " Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.482699 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/371e7aac-b969-41f2-af39-4dfb9ee44bbb-tmp" (OuterVolumeSpecName: "tmp") pod "371e7aac-b969-41f2-af39-4dfb9ee44bbb" (UID: "371e7aac-b969-41f2-af39-4dfb9ee44bbb"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.482945 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/371e7aac-b969-41f2-af39-4dfb9ee44bbb-config\") pod \"371e7aac-b969-41f2-af39-4dfb9ee44bbb\" (UID: \"371e7aac-b969-41f2-af39-4dfb9ee44bbb\") " Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.483173 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/371e7aac-b969-41f2-af39-4dfb9ee44bbb-serving-cert\") pod \"371e7aac-b969-41f2-af39-4dfb9ee44bbb\" (UID: \"371e7aac-b969-41f2-af39-4dfb9ee44bbb\") " Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.483271 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/371e7aac-b969-41f2-af39-4dfb9ee44bbb-client-ca" (OuterVolumeSpecName: "client-ca") pod "371e7aac-b969-41f2-af39-4dfb9ee44bbb" (UID: "371e7aac-b969-41f2-af39-4dfb9ee44bbb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.483679 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/371e7aac-b969-41f2-af39-4dfb9ee44bbb-config" (OuterVolumeSpecName: "config") pod "371e7aac-b969-41f2-af39-4dfb9ee44bbb" (UID: "371e7aac-b969-41f2-af39-4dfb9ee44bbb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.484083 5113 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/371e7aac-b969-41f2-af39-4dfb9ee44bbb-tmp\") on node \"crc\" DevicePath \"\"" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.484224 5113 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/371e7aac-b969-41f2-af39-4dfb9ee44bbb-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.484346 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/371e7aac-b969-41f2-af39-4dfb9ee44bbb-config\") on node \"crc\" DevicePath \"\"" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.487750 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/371e7aac-b969-41f2-af39-4dfb9ee44bbb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "371e7aac-b969-41f2-af39-4dfb9ee44bbb" (UID: "371e7aac-b969-41f2-af39-4dfb9ee44bbb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.487838 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/371e7aac-b969-41f2-af39-4dfb9ee44bbb-kube-api-access-6rrtp" (OuterVolumeSpecName: "kube-api-access-6rrtp") pod "371e7aac-b969-41f2-af39-4dfb9ee44bbb" (UID: "371e7aac-b969-41f2-af39-4dfb9ee44bbb"). InnerVolumeSpecName "kube-api-access-6rrtp". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.585875 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6362975-2d39-4357-9222-ba2387414081-config\") pod \"route-controller-manager-687f48bfdf-fwqmc\" (UID: \"f6362975-2d39-4357-9222-ba2387414081\") " pod="openshift-route-controller-manager/route-controller-manager-687f48bfdf-fwqmc" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.586026 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6362975-2d39-4357-9222-ba2387414081-serving-cert\") pod \"route-controller-manager-687f48bfdf-fwqmc\" (UID: \"f6362975-2d39-4357-9222-ba2387414081\") " pod="openshift-route-controller-manager/route-controller-manager-687f48bfdf-fwqmc" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.586273 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6362975-2d39-4357-9222-ba2387414081-client-ca\") pod \"route-controller-manager-687f48bfdf-fwqmc\" (UID: \"f6362975-2d39-4357-9222-ba2387414081\") " pod="openshift-route-controller-manager/route-controller-manager-687f48bfdf-fwqmc" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.586414 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz7tg\" (UniqueName: \"kubernetes.io/projected/f6362975-2d39-4357-9222-ba2387414081-kube-api-access-nz7tg\") pod \"route-controller-manager-687f48bfdf-fwqmc\" (UID: \"f6362975-2d39-4357-9222-ba2387414081\") " pod="openshift-route-controller-manager/route-controller-manager-687f48bfdf-fwqmc" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.586678 5113 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f6362975-2d39-4357-9222-ba2387414081-tmp\") pod \"route-controller-manager-687f48bfdf-fwqmc\" (UID: \"f6362975-2d39-4357-9222-ba2387414081\") " pod="openshift-route-controller-manager/route-controller-manager-687f48bfdf-fwqmc" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.586971 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rrtp\" (UniqueName: \"kubernetes.io/projected/371e7aac-b969-41f2-af39-4dfb9ee44bbb-kube-api-access-6rrtp\") on node \"crc\" DevicePath \"\"" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.587126 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/371e7aac-b969-41f2-af39-4dfb9ee44bbb-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.622167 5113 generic.go:358] "Generic (PLEG): container finished" podID="371e7aac-b969-41f2-af39-4dfb9ee44bbb" containerID="238a1160d21df74056a3209aa6bed4096f6ab3438a4fe91a8df185a107f633c5" exitCode=0 Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.622210 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f9f44588-cj8mj" event={"ID":"371e7aac-b969-41f2-af39-4dfb9ee44bbb","Type":"ContainerDied","Data":"238a1160d21df74056a3209aa6bed4096f6ab3438a4fe91a8df185a107f633c5"} Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.622330 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5f9f44588-cj8mj" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.622689 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5f9f44588-cj8mj" event={"ID":"371e7aac-b969-41f2-af39-4dfb9ee44bbb","Type":"ContainerDied","Data":"2c02912183fc42ee8b29d287fabb502b4e09aea4553e4a6731d2100af8d30df5"} Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.622949 5113 scope.go:117] "RemoveContainer" containerID="238a1160d21df74056a3209aa6bed4096f6ab3438a4fe91a8df185a107f633c5" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.625317 5113 generic.go:358] "Generic (PLEG): container finished" podID="04de374b-ab52-4152-9687-3812b901345b" containerID="b81c48efe0d22c0feb81108d1bafa755d83e78a3af0bcea6524f6a6d20f3bd9b" exitCode=0 Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.625423 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c88f6fd8d-96dr8" event={"ID":"04de374b-ab52-4152-9687-3812b901345b","Type":"ContainerDied","Data":"b81c48efe0d22c0feb81108d1bafa755d83e78a3af0bcea6524f6a6d20f3bd9b"} Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.648564 5113 scope.go:117] "RemoveContainer" containerID="238a1160d21df74056a3209aa6bed4096f6ab3438a4fe91a8df185a107f633c5" Jan 21 09:21:07 crc kubenswrapper[5113]: E0121 09:21:07.649039 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"238a1160d21df74056a3209aa6bed4096f6ab3438a4fe91a8df185a107f633c5\": container with ID starting with 238a1160d21df74056a3209aa6bed4096f6ab3438a4fe91a8df185a107f633c5 not found: ID does not exist" containerID="238a1160d21df74056a3209aa6bed4096f6ab3438a4fe91a8df185a107f633c5" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.649085 5113 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"238a1160d21df74056a3209aa6bed4096f6ab3438a4fe91a8df185a107f633c5"} err="failed to get container status \"238a1160d21df74056a3209aa6bed4096f6ab3438a4fe91a8df185a107f633c5\": rpc error: code = NotFound desc = could not find container \"238a1160d21df74056a3209aa6bed4096f6ab3438a4fe91a8df185a107f633c5\": container with ID starting with 238a1160d21df74056a3209aa6bed4096f6ab3438a4fe91a8df185a107f633c5 not found: ID does not exist" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.665848 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f9f44588-cj8mj"] Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.671093 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5f9f44588-cj8mj"] Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.688085 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6362975-2d39-4357-9222-ba2387414081-client-ca\") pod \"route-controller-manager-687f48bfdf-fwqmc\" (UID: \"f6362975-2d39-4357-9222-ba2387414081\") " pod="openshift-route-controller-manager/route-controller-manager-687f48bfdf-fwqmc" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.688372 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nz7tg\" (UniqueName: \"kubernetes.io/projected/f6362975-2d39-4357-9222-ba2387414081-kube-api-access-nz7tg\") pod \"route-controller-manager-687f48bfdf-fwqmc\" (UID: \"f6362975-2d39-4357-9222-ba2387414081\") " pod="openshift-route-controller-manager/route-controller-manager-687f48bfdf-fwqmc" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.688434 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f6362975-2d39-4357-9222-ba2387414081-tmp\") pod 
\"route-controller-manager-687f48bfdf-fwqmc\" (UID: \"f6362975-2d39-4357-9222-ba2387414081\") " pod="openshift-route-controller-manager/route-controller-manager-687f48bfdf-fwqmc" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.688530 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6362975-2d39-4357-9222-ba2387414081-config\") pod \"route-controller-manager-687f48bfdf-fwqmc\" (UID: \"f6362975-2d39-4357-9222-ba2387414081\") " pod="openshift-route-controller-manager/route-controller-manager-687f48bfdf-fwqmc" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.688592 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6362975-2d39-4357-9222-ba2387414081-serving-cert\") pod \"route-controller-manager-687f48bfdf-fwqmc\" (UID: \"f6362975-2d39-4357-9222-ba2387414081\") " pod="openshift-route-controller-manager/route-controller-manager-687f48bfdf-fwqmc" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.689235 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f6362975-2d39-4357-9222-ba2387414081-tmp\") pod \"route-controller-manager-687f48bfdf-fwqmc\" (UID: \"f6362975-2d39-4357-9222-ba2387414081\") " pod="openshift-route-controller-manager/route-controller-manager-687f48bfdf-fwqmc" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.690144 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6362975-2d39-4357-9222-ba2387414081-client-ca\") pod \"route-controller-manager-687f48bfdf-fwqmc\" (UID: \"f6362975-2d39-4357-9222-ba2387414081\") " pod="openshift-route-controller-manager/route-controller-manager-687f48bfdf-fwqmc" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.690813 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6362975-2d39-4357-9222-ba2387414081-config\") pod \"route-controller-manager-687f48bfdf-fwqmc\" (UID: \"f6362975-2d39-4357-9222-ba2387414081\") " pod="openshift-route-controller-manager/route-controller-manager-687f48bfdf-fwqmc" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.692264 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6362975-2d39-4357-9222-ba2387414081-serving-cert\") pod \"route-controller-manager-687f48bfdf-fwqmc\" (UID: \"f6362975-2d39-4357-9222-ba2387414081\") " pod="openshift-route-controller-manager/route-controller-manager-687f48bfdf-fwqmc" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.705706 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz7tg\" (UniqueName: \"kubernetes.io/projected/f6362975-2d39-4357-9222-ba2387414081-kube-api-access-nz7tg\") pod \"route-controller-manager-687f48bfdf-fwqmc\" (UID: \"f6362975-2d39-4357-9222-ba2387414081\") " pod="openshift-route-controller-manager/route-controller-manager-687f48bfdf-fwqmc" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.762932 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-687f48bfdf-fwqmc" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.795388 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-c88f6fd8d-96dr8" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.822055 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-db999bdc8-bqr4n"] Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.823305 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="04de374b-ab52-4152-9687-3812b901345b" containerName="controller-manager" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.824105 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="04de374b-ab52-4152-9687-3812b901345b" containerName="controller-manager" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.824367 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="04de374b-ab52-4152-9687-3812b901345b" containerName="controller-manager" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.832802 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-db999bdc8-bqr4n" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.837768 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-db999bdc8-bqr4n"] Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.890689 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/04de374b-ab52-4152-9687-3812b901345b-tmp\") pod \"04de374b-ab52-4152-9687-3812b901345b\" (UID: \"04de374b-ab52-4152-9687-3812b901345b\") " Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.890836 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04de374b-ab52-4152-9687-3812b901345b-config\") pod \"04de374b-ab52-4152-9687-3812b901345b\" (UID: \"04de374b-ab52-4152-9687-3812b901345b\") " Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.890860 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04de374b-ab52-4152-9687-3812b901345b-client-ca\") pod \"04de374b-ab52-4152-9687-3812b901345b\" (UID: \"04de374b-ab52-4152-9687-3812b901345b\") " Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.890916 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p488k\" (UniqueName: \"kubernetes.io/projected/04de374b-ab52-4152-9687-3812b901345b-kube-api-access-p488k\") pod \"04de374b-ab52-4152-9687-3812b901345b\" (UID: \"04de374b-ab52-4152-9687-3812b901345b\") " Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.890944 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04de374b-ab52-4152-9687-3812b901345b-serving-cert\") pod \"04de374b-ab52-4152-9687-3812b901345b\" (UID: 
\"04de374b-ab52-4152-9687-3812b901345b\") " Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.891025 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/04de374b-ab52-4152-9687-3812b901345b-proxy-ca-bundles\") pod \"04de374b-ab52-4152-9687-3812b901345b\" (UID: \"04de374b-ab52-4152-9687-3812b901345b\") " Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.892126 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04de374b-ab52-4152-9687-3812b901345b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "04de374b-ab52-4152-9687-3812b901345b" (UID: "04de374b-ab52-4152-9687-3812b901345b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.892447 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04de374b-ab52-4152-9687-3812b901345b-tmp" (OuterVolumeSpecName: "tmp") pod "04de374b-ab52-4152-9687-3812b901345b" (UID: "04de374b-ab52-4152-9687-3812b901345b"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.892941 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04de374b-ab52-4152-9687-3812b901345b-config" (OuterVolumeSpecName: "config") pod "04de374b-ab52-4152-9687-3812b901345b" (UID: "04de374b-ab52-4152-9687-3812b901345b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.893437 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04de374b-ab52-4152-9687-3812b901345b-client-ca" (OuterVolumeSpecName: "client-ca") pod "04de374b-ab52-4152-9687-3812b901345b" (UID: "04de374b-ab52-4152-9687-3812b901345b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.899749 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04de374b-ab52-4152-9687-3812b901345b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "04de374b-ab52-4152-9687-3812b901345b" (UID: "04de374b-ab52-4152-9687-3812b901345b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.900385 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04de374b-ab52-4152-9687-3812b901345b-kube-api-access-p488k" (OuterVolumeSpecName: "kube-api-access-p488k") pod "04de374b-ab52-4152-9687-3812b901345b" (UID: "04de374b-ab52-4152-9687-3812b901345b"). InnerVolumeSpecName "kube-api-access-p488k". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.992582 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8676753a-d0b3-4ace-bd2d-96e00bd08db2-config\") pod \"controller-manager-db999bdc8-bqr4n\" (UID: \"8676753a-d0b3-4ace-bd2d-96e00bd08db2\") " pod="openshift-controller-manager/controller-manager-db999bdc8-bqr4n" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.993791 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpv6k\" (UniqueName: \"kubernetes.io/projected/8676753a-d0b3-4ace-bd2d-96e00bd08db2-kube-api-access-rpv6k\") pod \"controller-manager-db999bdc8-bqr4n\" (UID: \"8676753a-d0b3-4ace-bd2d-96e00bd08db2\") " pod="openshift-controller-manager/controller-manager-db999bdc8-bqr4n" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.993915 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8676753a-d0b3-4ace-bd2d-96e00bd08db2-serving-cert\") pod \"controller-manager-db999bdc8-bqr4n\" (UID: \"8676753a-d0b3-4ace-bd2d-96e00bd08db2\") " pod="openshift-controller-manager/controller-manager-db999bdc8-bqr4n" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.994012 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8676753a-d0b3-4ace-bd2d-96e00bd08db2-proxy-ca-bundles\") pod \"controller-manager-db999bdc8-bqr4n\" (UID: \"8676753a-d0b3-4ace-bd2d-96e00bd08db2\") " pod="openshift-controller-manager/controller-manager-db999bdc8-bqr4n" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.994104 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/8676753a-d0b3-4ace-bd2d-96e00bd08db2-tmp\") pod \"controller-manager-db999bdc8-bqr4n\" (UID: \"8676753a-d0b3-4ace-bd2d-96e00bd08db2\") " pod="openshift-controller-manager/controller-manager-db999bdc8-bqr4n" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.994222 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8676753a-d0b3-4ace-bd2d-96e00bd08db2-client-ca\") pod \"controller-manager-db999bdc8-bqr4n\" (UID: \"8676753a-d0b3-4ace-bd2d-96e00bd08db2\") " pod="openshift-controller-manager/controller-manager-db999bdc8-bqr4n" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.994334 5113 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/04de374b-ab52-4152-9687-3812b901345b-tmp\") on node \"crc\" DevicePath \"\"" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.994399 5113 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04de374b-ab52-4152-9687-3812b901345b-config\") on node \"crc\" DevicePath \"\"" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.994457 5113 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04de374b-ab52-4152-9687-3812b901345b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.994520 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p488k\" (UniqueName: \"kubernetes.io/projected/04de374b-ab52-4152-9687-3812b901345b-kube-api-access-p488k\") on node \"crc\" DevicePath \"\"" Jan 21 09:21:07 crc kubenswrapper[5113]: I0121 09:21:07.994619 5113 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04de374b-ab52-4152-9687-3812b901345b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:21:07 crc 
kubenswrapper[5113]: I0121 09:21:07.994677 5113 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/04de374b-ab52-4152-9687-3812b901345b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 09:21:08 crc kubenswrapper[5113]: I0121 09:21:08.096082 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rpv6k\" (UniqueName: \"kubernetes.io/projected/8676753a-d0b3-4ace-bd2d-96e00bd08db2-kube-api-access-rpv6k\") pod \"controller-manager-db999bdc8-bqr4n\" (UID: \"8676753a-d0b3-4ace-bd2d-96e00bd08db2\") " pod="openshift-controller-manager/controller-manager-db999bdc8-bqr4n" Jan 21 09:21:08 crc kubenswrapper[5113]: I0121 09:21:08.096310 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8676753a-d0b3-4ace-bd2d-96e00bd08db2-serving-cert\") pod \"controller-manager-db999bdc8-bqr4n\" (UID: \"8676753a-d0b3-4ace-bd2d-96e00bd08db2\") " pod="openshift-controller-manager/controller-manager-db999bdc8-bqr4n" Jan 21 09:21:08 crc kubenswrapper[5113]: I0121 09:21:08.096366 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8676753a-d0b3-4ace-bd2d-96e00bd08db2-proxy-ca-bundles\") pod \"controller-manager-db999bdc8-bqr4n\" (UID: \"8676753a-d0b3-4ace-bd2d-96e00bd08db2\") " pod="openshift-controller-manager/controller-manager-db999bdc8-bqr4n" Jan 21 09:21:08 crc kubenswrapper[5113]: I0121 09:21:08.096409 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8676753a-d0b3-4ace-bd2d-96e00bd08db2-tmp\") pod \"controller-manager-db999bdc8-bqr4n\" (UID: \"8676753a-d0b3-4ace-bd2d-96e00bd08db2\") " pod="openshift-controller-manager/controller-manager-db999bdc8-bqr4n" Jan 21 09:21:08 crc kubenswrapper[5113]: I0121 09:21:08.096484 5113 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8676753a-d0b3-4ace-bd2d-96e00bd08db2-client-ca\") pod \"controller-manager-db999bdc8-bqr4n\" (UID: \"8676753a-d0b3-4ace-bd2d-96e00bd08db2\") " pod="openshift-controller-manager/controller-manager-db999bdc8-bqr4n" Jan 21 09:21:08 crc kubenswrapper[5113]: I0121 09:21:08.096905 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8676753a-d0b3-4ace-bd2d-96e00bd08db2-config\") pod \"controller-manager-db999bdc8-bqr4n\" (UID: \"8676753a-d0b3-4ace-bd2d-96e00bd08db2\") " pod="openshift-controller-manager/controller-manager-db999bdc8-bqr4n" Jan 21 09:21:08 crc kubenswrapper[5113]: I0121 09:21:08.098319 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8676753a-d0b3-4ace-bd2d-96e00bd08db2-proxy-ca-bundles\") pod \"controller-manager-db999bdc8-bqr4n\" (UID: \"8676753a-d0b3-4ace-bd2d-96e00bd08db2\") " pod="openshift-controller-manager/controller-manager-db999bdc8-bqr4n" Jan 21 09:21:08 crc kubenswrapper[5113]: I0121 09:21:08.099011 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8676753a-d0b3-4ace-bd2d-96e00bd08db2-config\") pod \"controller-manager-db999bdc8-bqr4n\" (UID: \"8676753a-d0b3-4ace-bd2d-96e00bd08db2\") " pod="openshift-controller-manager/controller-manager-db999bdc8-bqr4n" Jan 21 09:21:08 crc kubenswrapper[5113]: I0121 09:21:08.099278 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8676753a-d0b3-4ace-bd2d-96e00bd08db2-tmp\") pod \"controller-manager-db999bdc8-bqr4n\" (UID: \"8676753a-d0b3-4ace-bd2d-96e00bd08db2\") " pod="openshift-controller-manager/controller-manager-db999bdc8-bqr4n" Jan 21 09:21:08 crc kubenswrapper[5113]: I0121 
09:21:08.100289 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8676753a-d0b3-4ace-bd2d-96e00bd08db2-client-ca\") pod \"controller-manager-db999bdc8-bqr4n\" (UID: \"8676753a-d0b3-4ace-bd2d-96e00bd08db2\") " pod="openshift-controller-manager/controller-manager-db999bdc8-bqr4n" Jan 21 09:21:08 crc kubenswrapper[5113]: I0121 09:21:08.101074 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8676753a-d0b3-4ace-bd2d-96e00bd08db2-serving-cert\") pod \"controller-manager-db999bdc8-bqr4n\" (UID: \"8676753a-d0b3-4ace-bd2d-96e00bd08db2\") " pod="openshift-controller-manager/controller-manager-db999bdc8-bqr4n" Jan 21 09:21:08 crc kubenswrapper[5113]: I0121 09:21:08.129492 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpv6k\" (UniqueName: \"kubernetes.io/projected/8676753a-d0b3-4ace-bd2d-96e00bd08db2-kube-api-access-rpv6k\") pod \"controller-manager-db999bdc8-bqr4n\" (UID: \"8676753a-d0b3-4ace-bd2d-96e00bd08db2\") " pod="openshift-controller-manager/controller-manager-db999bdc8-bqr4n" Jan 21 09:21:08 crc kubenswrapper[5113]: I0121 09:21:08.154008 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-db999bdc8-bqr4n" Jan 21 09:21:08 crc kubenswrapper[5113]: I0121 09:21:08.264034 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-687f48bfdf-fwqmc"] Jan 21 09:21:08 crc kubenswrapper[5113]: I0121 09:21:08.378359 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-db999bdc8-bqr4n"] Jan 21 09:21:08 crc kubenswrapper[5113]: W0121 09:21:08.396950 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8676753a_d0b3_4ace_bd2d_96e00bd08db2.slice/crio-f3b995417015853df7c70acda4bda9b3881c9bd708c4ead3a4486c394bfb3ca7 WatchSource:0}: Error finding container f3b995417015853df7c70acda4bda9b3881c9bd708c4ead3a4486c394bfb3ca7: Status 404 returned error can't find the container with id f3b995417015853df7c70acda4bda9b3881c9bd708c4ead3a4486c394bfb3ca7 Jan 21 09:21:08 crc kubenswrapper[5113]: I0121 09:21:08.632632 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c88f6fd8d-96dr8" event={"ID":"04de374b-ab52-4152-9687-3812b901345b","Type":"ContainerDied","Data":"932214f12cbe32648832d1f75a3f68f880f204cb5ebcbc9287827afc7946abe9"} Jan 21 09:21:08 crc kubenswrapper[5113]: I0121 09:21:08.632656 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-c88f6fd8d-96dr8" Jan 21 09:21:08 crc kubenswrapper[5113]: I0121 09:21:08.633077 5113 scope.go:117] "RemoveContainer" containerID="b81c48efe0d22c0feb81108d1bafa755d83e78a3af0bcea6524f6a6d20f3bd9b" Jan 21 09:21:08 crc kubenswrapper[5113]: I0121 09:21:08.635108 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-db999bdc8-bqr4n" event={"ID":"8676753a-d0b3-4ace-bd2d-96e00bd08db2","Type":"ContainerStarted","Data":"356d45d6442f33ad0aafd3e1643bf430b910931391f02a62b3bb0fad1968f6da"} Jan 21 09:21:08 crc kubenswrapper[5113]: I0121 09:21:08.635176 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-db999bdc8-bqr4n" event={"ID":"8676753a-d0b3-4ace-bd2d-96e00bd08db2","Type":"ContainerStarted","Data":"f3b995417015853df7c70acda4bda9b3881c9bd708c4ead3a4486c394bfb3ca7"} Jan 21 09:21:08 crc kubenswrapper[5113]: I0121 09:21:08.635209 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-db999bdc8-bqr4n" Jan 21 09:21:08 crc kubenswrapper[5113]: I0121 09:21:08.637764 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-687f48bfdf-fwqmc" event={"ID":"f6362975-2d39-4357-9222-ba2387414081","Type":"ContainerStarted","Data":"61cfd17d5536297cbcda47acba0c4bf78277cecbdb7859db2a5f9ac5e83dabfa"} Jan 21 09:21:08 crc kubenswrapper[5113]: I0121 09:21:08.637815 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-687f48bfdf-fwqmc" event={"ID":"f6362975-2d39-4357-9222-ba2387414081","Type":"ContainerStarted","Data":"ac33c004824f51fe6c14c684a1cab91d40e9fc0a4cb171bab156677798df83b9"} Jan 21 09:21:08 crc kubenswrapper[5113]: I0121 09:21:08.638331 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" 
status="not ready" pod="openshift-route-controller-manager/route-controller-manager-687f48bfdf-fwqmc" Jan 21 09:21:08 crc kubenswrapper[5113]: I0121 09:21:08.662713 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-db999bdc8-bqr4n" podStartSLOduration=2.66269861 podStartE2EDuration="2.66269861s" podCreationTimestamp="2026-01-21 09:21:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:21:08.659763218 +0000 UTC m=+198.160590327" watchObservedRunningTime="2026-01-21 09:21:08.66269861 +0000 UTC m=+198.163525659" Jan 21 09:21:08 crc kubenswrapper[5113]: I0121 09:21:08.683299 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-687f48bfdf-fwqmc" podStartSLOduration=2.6832790859999998 podStartE2EDuration="2.683279086s" podCreationTimestamp="2026-01-21 09:21:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:21:08.682430952 +0000 UTC m=+198.183258011" watchObservedRunningTime="2026-01-21 09:21:08.683279086 +0000 UTC m=+198.184106135" Jan 21 09:21:08 crc kubenswrapper[5113]: I0121 09:21:08.697046 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-c88f6fd8d-96dr8"] Jan 21 09:21:08 crc kubenswrapper[5113]: I0121 09:21:08.700602 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-c88f6fd8d-96dr8"] Jan 21 09:21:08 crc kubenswrapper[5113]: I0121 09:21:08.866030 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04de374b-ab52-4152-9687-3812b901345b" path="/var/lib/kubelet/pods/04de374b-ab52-4152-9687-3812b901345b/volumes" Jan 21 09:21:08 crc kubenswrapper[5113]: I0121 09:21:08.866773 
5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="371e7aac-b969-41f2-af39-4dfb9ee44bbb" path="/var/lib/kubelet/pods/371e7aac-b969-41f2-af39-4dfb9ee44bbb/volumes" Jan 21 09:21:08 crc kubenswrapper[5113]: I0121 09:21:08.977285 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-687f48bfdf-fwqmc" Jan 21 09:21:09 crc kubenswrapper[5113]: I0121 09:21:09.635844 5113 patch_prober.go:28] interesting pod/controller-manager-db999bdc8-bqr4n container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": context deadline exceeded" start-of-body= Jan 21 09:21:09 crc kubenswrapper[5113]: I0121 09:21:09.635937 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-db999bdc8-bqr4n" podUID="8676753a-d0b3-4ace-bd2d-96e00bd08db2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": context deadline exceeded" Jan 21 09:21:09 crc kubenswrapper[5113]: I0121 09:21:09.661923 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-db999bdc8-bqr4n" Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.819555 5113 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.820114 5113 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.820663 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://f55202dae8577531752698ed58e25d96faab357fd47c7e1214e97d227c27dec1" 
gracePeriod=15 Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.820663 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://b1ce6cf47c3de0a268370a6bb606537c024693cefd424a31a593f0f3863d2f40" gracePeriod=15 Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.820703 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://ea73520647d1029136429fbd1dd2f9ae77c16ccdd5d18b96557ba585203bc15a" gracePeriod=15 Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.820772 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://a5d9b314f85de77e4faf2c3725f9ac3bbf0b3efa5333a458592300fe5fadb236" gracePeriod=15 Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.820848 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://8c21a85eeaadf7c1ac610c91b634072cccf19ee75ba906cbfb9422538406201a" gracePeriod=15 Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.820943 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.820959 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 09:21:20 crc 
kubenswrapper[5113]: I0121 09:21:20.820972 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.820979 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.820989 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.820996 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.821006 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.821013 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.821024 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.821031 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.821043 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.821050 5113 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.821062 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.821069 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.821088 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.821094 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.821215 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.821234 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.821245 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.821255 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.821267 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" 
containerName="kube-apiserver-cert-syncer" Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.821278 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.821289 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.821298 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.821407 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.821416 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.821428 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.821435 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.821571 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.829519 5113 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 
09:21:20.836633 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.839659 5113 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="3a14caf222afb62aaabdc47808b6f944" podUID="57755cc5f99000cc11e193051474d4e2" Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.901810 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.901868 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.901947 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.902015 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: 
\"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.902037 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.902057 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.902085 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.902139 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.902159 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod 
\"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.902221 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:21:20 crc kubenswrapper[5113]: I0121 09:21:20.909499 5113 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:21:20 crc kubenswrapper[5113]: E0121 09:21:20.910150 5113 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.181:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:21:21 crc kubenswrapper[5113]: I0121 09:21:21.003093 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:21:21 crc kubenswrapper[5113]: I0121 09:21:21.003157 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:21:21 crc kubenswrapper[5113]: I0121 09:21:21.003188 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:21:21 crc kubenswrapper[5113]: I0121 09:21:21.003236 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:21:21 crc kubenswrapper[5113]: I0121 09:21:21.003322 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:21:21 crc kubenswrapper[5113]: I0121 09:21:21.003354 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:21:21 crc kubenswrapper[5113]: I0121 09:21:21.003394 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:21:21 crc kubenswrapper[5113]: I0121 09:21:21.003438 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod 
\"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:21:21 crc kubenswrapper[5113]: I0121 09:21:21.003467 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:21:21 crc kubenswrapper[5113]: I0121 09:21:21.003485 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:21:21 crc kubenswrapper[5113]: I0121 09:21:21.003520 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:21:21 crc kubenswrapper[5113]: I0121 09:21:21.003576 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:21:21 crc kubenswrapper[5113]: I0121 09:21:21.003684 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:21:21 crc kubenswrapper[5113]: I0121 09:21:21.003761 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:21:21 crc kubenswrapper[5113]: I0121 09:21:21.003809 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:21:21 crc kubenswrapper[5113]: I0121 09:21:21.003930 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:21:21 crc kubenswrapper[5113]: I0121 09:21:21.004002 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:21:21 crc kubenswrapper[5113]: I0121 09:21:21.004040 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" 
Jan 21 09:21:21 crc kubenswrapper[5113]: I0121 09:21:21.004307 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:21:21 crc kubenswrapper[5113]: I0121 09:21:21.004351 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:21:21 crc kubenswrapper[5113]: I0121 09:21:21.211286 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:21:21 crc kubenswrapper[5113]: W0121 09:21:21.232966 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7dbc7e1ee9c187a863ef9b473fad27b.slice/crio-58296b46d4e2685ce7236beabd77462f47f5e6a72169273b726380989f70a529 WatchSource:0}: Error finding container 58296b46d4e2685ce7236beabd77462f47f5e6a72169273b726380989f70a529: Status 404 returned error can't find the container with id 58296b46d4e2685ce7236beabd77462f47f5e6a72169273b726380989f70a529 Jan 21 09:21:21 crc kubenswrapper[5113]: E0121 09:21:21.235643 5113 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.181:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188cb48f4e361ae0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:21:21.235253984 +0000 UTC m=+210.736081033,LastTimestamp:2026-01-21 09:21:21.235253984 +0000 UTC m=+210.736081033,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:21:21 crc kubenswrapper[5113]: I0121 09:21:21.748877 5113 generic.go:358] "Generic (PLEG): container finished" podID="5ebafce1-e146-4504-982d-5d5a30f42c6f" containerID="8fa6b0373442d86b53071e1cc261ea97cac446a064d771e5300a23acd0f25870" exitCode=0 Jan 21 09:21:21 crc kubenswrapper[5113]: I0121 09:21:21.748941 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"5ebafce1-e146-4504-982d-5d5a30f42c6f","Type":"ContainerDied","Data":"8fa6b0373442d86b53071e1cc261ea97cac446a064d771e5300a23acd0f25870"} Jan 21 09:21:21 crc kubenswrapper[5113]: I0121 09:21:21.750326 5113 status_manager.go:895] "Failed to get status for pod" podUID="5ebafce1-e146-4504-982d-5d5a30f42c6f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Jan 21 09:21:21 crc kubenswrapper[5113]: I0121 09:21:21.751873 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 21 09:21:21 crc 
kubenswrapper[5113]: I0121 09:21:21.753619 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 21 09:21:21 crc kubenswrapper[5113]: I0121 09:21:21.754391 5113 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="b1ce6cf47c3de0a268370a6bb606537c024693cefd424a31a593f0f3863d2f40" exitCode=0 Jan 21 09:21:21 crc kubenswrapper[5113]: I0121 09:21:21.754422 5113 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="ea73520647d1029136429fbd1dd2f9ae77c16ccdd5d18b96557ba585203bc15a" exitCode=0 Jan 21 09:21:21 crc kubenswrapper[5113]: I0121 09:21:21.754433 5113 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="a5d9b314f85de77e4faf2c3725f9ac3bbf0b3efa5333a458592300fe5fadb236" exitCode=0 Jan 21 09:21:21 crc kubenswrapper[5113]: I0121 09:21:21.754444 5113 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="8c21a85eeaadf7c1ac610c91b634072cccf19ee75ba906cbfb9422538406201a" exitCode=2 Jan 21 09:21:21 crc kubenswrapper[5113]: I0121 09:21:21.754485 5113 scope.go:117] "RemoveContainer" containerID="383eb31f942f4a72a515ee030cd46d5e1130d7d74a8927d5daa09c8d744a67f6" Jan 21 09:21:21 crc kubenswrapper[5113]: I0121 09:21:21.757009 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"4e3d199fab8d276e5c7f984c23ce39dd686acc2b372a501174c9b11f0cdda7f4"} Jan 21 09:21:21 crc kubenswrapper[5113]: I0121 09:21:21.757064 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"58296b46d4e2685ce7236beabd77462f47f5e6a72169273b726380989f70a529"} Jan 21 09:21:21 crc kubenswrapper[5113]: I0121 09:21:21.757413 5113 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:21:21 crc kubenswrapper[5113]: I0121 09:21:21.757752 5113 status_manager.go:895] "Failed to get status for pod" podUID="5ebafce1-e146-4504-982d-5d5a30f42c6f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Jan 21 09:21:21 crc kubenswrapper[5113]: E0121 09:21:21.758361 5113 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.181:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:21:21 crc kubenswrapper[5113]: E0121 09:21:21.886673 5113 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.181:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188cb48f4e361ae0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:21:21.235253984 +0000 UTC m=+210.736081033,LastTimestamp:2026-01-21 09:21:21.235253984 +0000 UTC m=+210.736081033,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:21:22 crc kubenswrapper[5113]: I0121 09:21:22.768170 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.236182 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.237550 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.238299 5113 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.238870 5113 status_manager.go:895] "Failed to get status for pod" podUID="5ebafce1-e146-4504-982d-5d5a30f42c6f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.240646 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.241187 5113 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.241897 5113 status_manager.go:895] "Failed to get status for pod" podUID="5ebafce1-e146-4504-982d-5d5a30f42c6f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.364713 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5ebafce1-e146-4504-982d-5d5a30f42c6f-var-lock\") pod \"5ebafce1-e146-4504-982d-5d5a30f42c6f\" (UID: \"5ebafce1-e146-4504-982d-5d5a30f42c6f\") " Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.364834 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.364867 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.364900 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ebafce1-e146-4504-982d-5d5a30f42c6f-kube-api-access\") pod \"5ebafce1-e146-4504-982d-5d5a30f42c6f\" (UID: \"5ebafce1-e146-4504-982d-5d5a30f42c6f\") " Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.364989 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.365037 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.365072 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5ebafce1-e146-4504-982d-5d5a30f42c6f-kubelet-dir\") pod \"5ebafce1-e146-4504-982d-5d5a30f42c6f\" (UID: \"5ebafce1-e146-4504-982d-5d5a30f42c6f\") " Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.365069 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.365079 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ebafce1-e146-4504-982d-5d5a30f42c6f-var-lock" (OuterVolumeSpecName: "var-lock") pod "5ebafce1-e146-4504-982d-5d5a30f42c6f" (UID: "5ebafce1-e146-4504-982d-5d5a30f42c6f"). 
InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.365132 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ebafce1-e146-4504-982d-5d5a30f42c6f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5ebafce1-e146-4504-982d-5d5a30f42c6f" (UID: "5ebafce1-e146-4504-982d-5d5a30f42c6f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.365168 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.365849 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.366095 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.366224 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.366942 5113 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.366980 5113 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5ebafce1-e146-4504-982d-5d5a30f42c6f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.366997 5113 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.367014 5113 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5ebafce1-e146-4504-982d-5d5a30f42c6f-var-lock\") on node \"crc\" DevicePath \"\"" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.367031 5113 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.367047 5113 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.371954 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.381135 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebafce1-e146-4504-982d-5d5a30f42c6f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5ebafce1-e146-4504-982d-5d5a30f42c6f" (UID: "5ebafce1-e146-4504-982d-5d5a30f42c6f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.469374 5113 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.469449 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ebafce1-e146-4504-982d-5d5a30f42c6f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.778950 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.779880 5113 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="f55202dae8577531752698ed58e25d96faab357fd47c7e1214e97d227c27dec1" exitCode=0 Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.779947 5113 scope.go:117] "RemoveContainer" containerID="b1ce6cf47c3de0a268370a6bb606537c024693cefd424a31a593f0f3863d2f40" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.780023 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.783509 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"5ebafce1-e146-4504-982d-5d5a30f42c6f","Type":"ContainerDied","Data":"55a657d567c24b4426dd6997c35873d04b9dbfa4f5559290557ca847cac3fde8"} Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.783552 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55a657d567c24b4426dd6997c35873d04b9dbfa4f5559290557ca847cac3fde8" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.783645 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.801359 5113 scope.go:117] "RemoveContainer" containerID="ea73520647d1029136429fbd1dd2f9ae77c16ccdd5d18b96557ba585203bc15a" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.812910 5113 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.813234 5113 status_manager.go:895] "Failed to get status for pod" podUID="5ebafce1-e146-4504-982d-5d5a30f42c6f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.815006 5113 status_manager.go:895] "Failed to get status for pod" podUID="5ebafce1-e146-4504-982d-5d5a30f42c6f" pod="openshift-kube-apiserver/installer-12-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.815488 5113 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.824148 5113 scope.go:117] "RemoveContainer" containerID="a5d9b314f85de77e4faf2c3725f9ac3bbf0b3efa5333a458592300fe5fadb236" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.842005 5113 scope.go:117] "RemoveContainer" containerID="8c21a85eeaadf7c1ac610c91b634072cccf19ee75ba906cbfb9422538406201a" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.866166 5113 scope.go:117] "RemoveContainer" containerID="f55202dae8577531752698ed58e25d96faab357fd47c7e1214e97d227c27dec1" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.899434 5113 scope.go:117] "RemoveContainer" containerID="8f36f477e5d68cc57d9416681cfa4f9bf3ddce9fcd5eabc6232df87d40fa2477" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.975709 5113 scope.go:117] "RemoveContainer" containerID="b1ce6cf47c3de0a268370a6bb606537c024693cefd424a31a593f0f3863d2f40" Jan 21 09:21:23 crc kubenswrapper[5113]: E0121 09:21:23.976277 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1ce6cf47c3de0a268370a6bb606537c024693cefd424a31a593f0f3863d2f40\": container with ID starting with b1ce6cf47c3de0a268370a6bb606537c024693cefd424a31a593f0f3863d2f40 not found: ID does not exist" containerID="b1ce6cf47c3de0a268370a6bb606537c024693cefd424a31a593f0f3863d2f40" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.976344 5113 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1ce6cf47c3de0a268370a6bb606537c024693cefd424a31a593f0f3863d2f40"} err="failed to get container status \"b1ce6cf47c3de0a268370a6bb606537c024693cefd424a31a593f0f3863d2f40\": rpc error: code = NotFound desc = could not find container \"b1ce6cf47c3de0a268370a6bb606537c024693cefd424a31a593f0f3863d2f40\": container with ID starting with b1ce6cf47c3de0a268370a6bb606537c024693cefd424a31a593f0f3863d2f40 not found: ID does not exist" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.976385 5113 scope.go:117] "RemoveContainer" containerID="ea73520647d1029136429fbd1dd2f9ae77c16ccdd5d18b96557ba585203bc15a" Jan 21 09:21:23 crc kubenswrapper[5113]: E0121 09:21:23.976926 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea73520647d1029136429fbd1dd2f9ae77c16ccdd5d18b96557ba585203bc15a\": container with ID starting with ea73520647d1029136429fbd1dd2f9ae77c16ccdd5d18b96557ba585203bc15a not found: ID does not exist" containerID="ea73520647d1029136429fbd1dd2f9ae77c16ccdd5d18b96557ba585203bc15a" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.976977 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea73520647d1029136429fbd1dd2f9ae77c16ccdd5d18b96557ba585203bc15a"} err="failed to get container status \"ea73520647d1029136429fbd1dd2f9ae77c16ccdd5d18b96557ba585203bc15a\": rpc error: code = NotFound desc = could not find container \"ea73520647d1029136429fbd1dd2f9ae77c16ccdd5d18b96557ba585203bc15a\": container with ID starting with ea73520647d1029136429fbd1dd2f9ae77c16ccdd5d18b96557ba585203bc15a not found: ID does not exist" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.977009 5113 scope.go:117] "RemoveContainer" containerID="a5d9b314f85de77e4faf2c3725f9ac3bbf0b3efa5333a458592300fe5fadb236" Jan 21 09:21:23 crc kubenswrapper[5113]: E0121 
09:21:23.977329 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5d9b314f85de77e4faf2c3725f9ac3bbf0b3efa5333a458592300fe5fadb236\": container with ID starting with a5d9b314f85de77e4faf2c3725f9ac3bbf0b3efa5333a458592300fe5fadb236 not found: ID does not exist" containerID="a5d9b314f85de77e4faf2c3725f9ac3bbf0b3efa5333a458592300fe5fadb236" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.977390 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5d9b314f85de77e4faf2c3725f9ac3bbf0b3efa5333a458592300fe5fadb236"} err="failed to get container status \"a5d9b314f85de77e4faf2c3725f9ac3bbf0b3efa5333a458592300fe5fadb236\": rpc error: code = NotFound desc = could not find container \"a5d9b314f85de77e4faf2c3725f9ac3bbf0b3efa5333a458592300fe5fadb236\": container with ID starting with a5d9b314f85de77e4faf2c3725f9ac3bbf0b3efa5333a458592300fe5fadb236 not found: ID does not exist" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.977410 5113 scope.go:117] "RemoveContainer" containerID="8c21a85eeaadf7c1ac610c91b634072cccf19ee75ba906cbfb9422538406201a" Jan 21 09:21:23 crc kubenswrapper[5113]: E0121 09:21:23.977844 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c21a85eeaadf7c1ac610c91b634072cccf19ee75ba906cbfb9422538406201a\": container with ID starting with 8c21a85eeaadf7c1ac610c91b634072cccf19ee75ba906cbfb9422538406201a not found: ID does not exist" containerID="8c21a85eeaadf7c1ac610c91b634072cccf19ee75ba906cbfb9422538406201a" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.977873 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c21a85eeaadf7c1ac610c91b634072cccf19ee75ba906cbfb9422538406201a"} err="failed to get container status \"8c21a85eeaadf7c1ac610c91b634072cccf19ee75ba906cbfb9422538406201a\": rpc 
error: code = NotFound desc = could not find container \"8c21a85eeaadf7c1ac610c91b634072cccf19ee75ba906cbfb9422538406201a\": container with ID starting with 8c21a85eeaadf7c1ac610c91b634072cccf19ee75ba906cbfb9422538406201a not found: ID does not exist" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.977891 5113 scope.go:117] "RemoveContainer" containerID="f55202dae8577531752698ed58e25d96faab357fd47c7e1214e97d227c27dec1" Jan 21 09:21:23 crc kubenswrapper[5113]: E0121 09:21:23.978157 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f55202dae8577531752698ed58e25d96faab357fd47c7e1214e97d227c27dec1\": container with ID starting with f55202dae8577531752698ed58e25d96faab357fd47c7e1214e97d227c27dec1 not found: ID does not exist" containerID="f55202dae8577531752698ed58e25d96faab357fd47c7e1214e97d227c27dec1" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.978192 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f55202dae8577531752698ed58e25d96faab357fd47c7e1214e97d227c27dec1"} err="failed to get container status \"f55202dae8577531752698ed58e25d96faab357fd47c7e1214e97d227c27dec1\": rpc error: code = NotFound desc = could not find container \"f55202dae8577531752698ed58e25d96faab357fd47c7e1214e97d227c27dec1\": container with ID starting with f55202dae8577531752698ed58e25d96faab357fd47c7e1214e97d227c27dec1 not found: ID does not exist" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.978212 5113 scope.go:117] "RemoveContainer" containerID="8f36f477e5d68cc57d9416681cfa4f9bf3ddce9fcd5eabc6232df87d40fa2477" Jan 21 09:21:23 crc kubenswrapper[5113]: E0121 09:21:23.978669 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f36f477e5d68cc57d9416681cfa4f9bf3ddce9fcd5eabc6232df87d40fa2477\": container with ID starting with 
8f36f477e5d68cc57d9416681cfa4f9bf3ddce9fcd5eabc6232df87d40fa2477 not found: ID does not exist" containerID="8f36f477e5d68cc57d9416681cfa4f9bf3ddce9fcd5eabc6232df87d40fa2477" Jan 21 09:21:23 crc kubenswrapper[5113]: I0121 09:21:23.978704 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f36f477e5d68cc57d9416681cfa4f9bf3ddce9fcd5eabc6232df87d40fa2477"} err="failed to get container status \"8f36f477e5d68cc57d9416681cfa4f9bf3ddce9fcd5eabc6232df87d40fa2477\": rpc error: code = NotFound desc = could not find container \"8f36f477e5d68cc57d9416681cfa4f9bf3ddce9fcd5eabc6232df87d40fa2477\": container with ID starting with 8f36f477e5d68cc57d9416681cfa4f9bf3ddce9fcd5eabc6232df87d40fa2477 not found: ID does not exist" Jan 21 09:21:24 crc kubenswrapper[5113]: I0121 09:21:24.856913 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Jan 21 09:21:25 crc kubenswrapper[5113]: E0121 09:21:25.509599 5113 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" Jan 21 09:21:25 crc kubenswrapper[5113]: E0121 09:21:25.510391 5113 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" Jan 21 09:21:25 crc kubenswrapper[5113]: E0121 09:21:25.511365 5113 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" Jan 21 09:21:25 crc kubenswrapper[5113]: E0121 09:21:25.511933 5113 controller.go:195] "Failed to update 
lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" Jan 21 09:21:25 crc kubenswrapper[5113]: E0121 09:21:25.512379 5113 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" Jan 21 09:21:25 crc kubenswrapper[5113]: I0121 09:21:25.512419 5113 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 21 09:21:25 crc kubenswrapper[5113]: E0121 09:21:25.513022 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" interval="200ms" Jan 21 09:21:25 crc kubenswrapper[5113]: E0121 09:21:25.714269 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" interval="400ms" Jan 21 09:21:26 crc kubenswrapper[5113]: E0121 09:21:26.115998 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" interval="800ms" Jan 21 09:21:26 crc kubenswrapper[5113]: E0121 09:21:26.917941 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" 
interval="1.6s" Jan 21 09:21:28 crc kubenswrapper[5113]: I0121 09:21:28.340153 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 09:21:28 crc kubenswrapper[5113]: I0121 09:21:28.340242 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 09:21:28 crc kubenswrapper[5113]: E0121 09:21:28.519311 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" interval="3.2s" Jan 21 09:21:30 crc kubenswrapper[5113]: I0121 09:21:30.853449 5113 status_manager.go:895] "Failed to get status for pod" podUID="5ebafce1-e146-4504-982d-5d5a30f42c6f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Jan 21 09:21:31 crc kubenswrapper[5113]: E0121 09:21:31.722447 5113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.181:6443: connect: connection refused" interval="6.4s" Jan 21 09:21:31 crc kubenswrapper[5113]: E0121 09:21:31.888118 5113 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.181:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188cb48f4e361ae0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 09:21:21.235253984 +0000 UTC m=+210.736081033,LastTimestamp:2026-01-21 09:21:21.235253984 +0000 UTC m=+210.736081033,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 09:21:32 crc kubenswrapper[5113]: I0121 09:21:32.844049 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:21:32 crc kubenswrapper[5113]: I0121 09:21:32.845506 5113 status_manager.go:895] "Failed to get status for pod" podUID="5ebafce1-e146-4504-982d-5d5a30f42c6f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Jan 21 09:21:32 crc kubenswrapper[5113]: I0121 09:21:32.866268 5113 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="56bbb3a4-33c5-4edf-b331-6c8de091efa8" Jan 21 09:21:32 crc kubenswrapper[5113]: I0121 09:21:32.866316 5113 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="56bbb3a4-33c5-4edf-b331-6c8de091efa8" Jan 21 09:21:32 crc kubenswrapper[5113]: E0121 09:21:32.866971 5113 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:21:32 crc kubenswrapper[5113]: I0121 09:21:32.867471 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:21:32 crc kubenswrapper[5113]: W0121 09:21:32.891528 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57755cc5f99000cc11e193051474d4e2.slice/crio-64790135f9120aa6ea0362d7157a91f0af5183b1c2916b199789491608c751cf WatchSource:0}: Error finding container 64790135f9120aa6ea0362d7157a91f0af5183b1c2916b199789491608c751cf: Status 404 returned error can't find the container with id 64790135f9120aa6ea0362d7157a91f0af5183b1c2916b199789491608c751cf Jan 21 09:21:32 crc kubenswrapper[5113]: E0121 09:21:32.903167 5113 desired_state_of_world_populator.go:305] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.181:6443: connect: connection refused" pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" volumeName="registry-storage" Jan 21 09:21:33 crc kubenswrapper[5113]: I0121 09:21:33.873475 5113 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="1db47eafbb4b9890b1fa74bd7b806da6740e387b1019c207b5d22434c7c65089" exitCode=0 Jan 21 09:21:33 crc kubenswrapper[5113]: I0121 09:21:33.874067 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"1db47eafbb4b9890b1fa74bd7b806da6740e387b1019c207b5d22434c7c65089"} Jan 21 09:21:33 crc kubenswrapper[5113]: I0121 09:21:33.874163 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"64790135f9120aa6ea0362d7157a91f0af5183b1c2916b199789491608c751cf"} Jan 21 09:21:33 crc kubenswrapper[5113]: I0121 09:21:33.874675 5113 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="56bbb3a4-33c5-4edf-b331-6c8de091efa8" Jan 21 09:21:33 crc kubenswrapper[5113]: I0121 09:21:33.874702 5113 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="56bbb3a4-33c5-4edf-b331-6c8de091efa8" Jan 21 09:21:33 crc kubenswrapper[5113]: I0121 09:21:33.875291 5113 status_manager.go:895] "Failed to get status for pod" podUID="5ebafce1-e146-4504-982d-5d5a30f42c6f" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" Jan 21 09:21:33 crc kubenswrapper[5113]: E0121 09:21:33.875332 5113 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.181:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:21:34 crc kubenswrapper[5113]: I0121 09:21:34.906721 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"342ec9554b62041386eaf49d84ebbe0eb2cf5d70f471e756d17590f06908bc40"} Jan 21 09:21:34 crc kubenswrapper[5113]: I0121 09:21:34.907217 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"20953763f62d55000de871d6b61e4fbae0ec15055a918942ef6b856a9d8a567b"} Jan 21 09:21:35 crc kubenswrapper[5113]: I0121 09:21:35.913073 5113 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 09:21:35 crc kubenswrapper[5113]: I0121 09:21:35.913114 5113 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="8b462f33795ed36c96eb82d0605e3d0d75cda8a208712e5a08bbe1199b460457" exitCode=1 Jan 21 09:21:35 crc kubenswrapper[5113]: I0121 09:21:35.913233 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"8b462f33795ed36c96eb82d0605e3d0d75cda8a208712e5a08bbe1199b460457"} Jan 21 09:21:35 crc kubenswrapper[5113]: I0121 09:21:35.913859 5113 scope.go:117] "RemoveContainer" containerID="8b462f33795ed36c96eb82d0605e3d0d75cda8a208712e5a08bbe1199b460457" Jan 21 09:21:35 crc kubenswrapper[5113]: I0121 09:21:35.916846 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"f3e85c77091ce9e094575cbb0a92fdd0f36c0b7e02725884452060db84a1f7e2"} Jan 21 09:21:35 crc kubenswrapper[5113]: I0121 09:21:35.916888 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"12bbada086a1f292da7c4996c338aefcc2c709c7869d81592312447c9645fd56"} Jan 21 09:21:35 crc kubenswrapper[5113]: I0121 09:21:35.916898 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"3437aa143f33070d242973016891c55996a011efb2bf1033d1cadf1191437eca"} Jan 21 09:21:35 crc kubenswrapper[5113]: I0121 09:21:35.917115 5113 kubelet.go:2658] "SyncLoop (probe)" 
probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:21:35 crc kubenswrapper[5113]: I0121 09:21:35.917197 5113 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="56bbb3a4-33c5-4edf-b331-6c8de091efa8" Jan 21 09:21:35 crc kubenswrapper[5113]: I0121 09:21:35.917218 5113 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="56bbb3a4-33c5-4edf-b331-6c8de091efa8" Jan 21 09:21:36 crc kubenswrapper[5113]: I0121 09:21:36.390413 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 09:21:36 crc kubenswrapper[5113]: I0121 09:21:36.925797 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 09:21:36 crc kubenswrapper[5113]: I0121 09:21:36.926113 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"d7fb9fd2bb69c87652aed0c474599fe3ea941640ddfb1ec04613fa7ce0344ea8"} Jan 21 09:21:37 crc kubenswrapper[5113]: I0121 09:21:37.867713 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:21:37 crc kubenswrapper[5113]: I0121 09:21:37.867775 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:21:37 crc kubenswrapper[5113]: I0121 09:21:37.878829 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:21:40 crc kubenswrapper[5113]: I0121 09:21:40.708148 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not 
ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 09:21:40 crc kubenswrapper[5113]: I0121 09:21:40.930864 5113 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:21:40 crc kubenswrapper[5113]: I0121 09:21:40.930900 5113 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:21:40 crc kubenswrapper[5113]: I0121 09:21:40.949329 5113 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="56bbb3a4-33c5-4edf-b331-6c8de091efa8" Jan 21 09:21:40 crc kubenswrapper[5113]: I0121 09:21:40.949364 5113 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="56bbb3a4-33c5-4edf-b331-6c8de091efa8" Jan 21 09:21:40 crc kubenswrapper[5113]: I0121 09:21:40.954310 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:21:40 crc kubenswrapper[5113]: I0121 09:21:40.994485 5113 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="236d5f36-19c9-4ffb-853e-a382bd250ed2" Jan 21 09:21:41 crc kubenswrapper[5113]: I0121 09:21:41.956335 5113 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="56bbb3a4-33c5-4edf-b331-6c8de091efa8" Jan 21 09:21:41 crc kubenswrapper[5113]: I0121 09:21:41.956381 5113 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="56bbb3a4-33c5-4edf-b331-6c8de091efa8" Jan 21 09:21:41 crc kubenswrapper[5113]: I0121 09:21:41.959337 5113 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" 
podUID="236d5f36-19c9-4ffb-853e-a382bd250ed2" Jan 21 09:21:44 crc kubenswrapper[5113]: I0121 09:21:44.564175 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 09:21:44 crc kubenswrapper[5113]: I0121 09:21:44.571559 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 09:21:51 crc kubenswrapper[5113]: I0121 09:21:51.029609 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Jan 21 09:21:51 crc kubenswrapper[5113]: I0121 09:21:51.256216 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Jan 21 09:21:51 crc kubenswrapper[5113]: I0121 09:21:51.640787 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Jan 21 09:21:51 crc kubenswrapper[5113]: I0121 09:21:51.875088 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Jan 21 09:21:51 crc kubenswrapper[5113]: I0121 09:21:51.925286 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Jan 21 09:21:52 crc kubenswrapper[5113]: I0121 09:21:52.160648 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Jan 21 09:21:52 crc kubenswrapper[5113]: I0121 09:21:52.219642 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 21 09:21:52 crc kubenswrapper[5113]: I0121 09:21:52.549543 5113 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Jan 21 09:21:52 crc kubenswrapper[5113]: I0121 09:21:52.608851 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Jan 21 09:21:52 crc kubenswrapper[5113]: I0121 09:21:52.632962 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Jan 21 09:21:53 crc kubenswrapper[5113]: I0121 09:21:53.317556 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Jan 21 09:21:53 crc kubenswrapper[5113]: I0121 09:21:53.566676 5113 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Jan 21 09:21:53 crc kubenswrapper[5113]: I0121 09:21:53.569989 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Jan 21 09:21:53 crc kubenswrapper[5113]: I0121 09:21:53.577685 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 09:21:53 crc kubenswrapper[5113]: I0121 09:21:53.577828 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 09:21:53 crc kubenswrapper[5113]: I0121 09:21:53.586852 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 09:21:53 crc kubenswrapper[5113]: I0121 09:21:53.608257 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=13.608228964 podStartE2EDuration="13.608228964s" podCreationTimestamp="2026-01-21 09:21:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-21 09:21:53.605380124 +0000 UTC m=+243.106207203" watchObservedRunningTime="2026-01-21 09:21:53.608228964 +0000 UTC m=+243.109056043" Jan 21 09:21:53 crc kubenswrapper[5113]: I0121 09:21:53.717552 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Jan 21 09:21:53 crc kubenswrapper[5113]: I0121 09:21:53.782382 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Jan 21 09:21:54 crc kubenswrapper[5113]: I0121 09:21:54.091340 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Jan 21 09:21:54 crc kubenswrapper[5113]: I0121 09:21:54.233081 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Jan 21 09:21:54 crc kubenswrapper[5113]: I0121 09:21:54.256192 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Jan 21 09:21:54 crc kubenswrapper[5113]: I0121 09:21:54.280479 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Jan 21 09:21:54 crc kubenswrapper[5113]: I0121 09:21:54.385075 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Jan 21 09:21:54 crc kubenswrapper[5113]: I0121 09:21:54.400823 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Jan 21 09:21:54 crc kubenswrapper[5113]: I0121 09:21:54.427511 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Jan 21 09:21:54 crc kubenswrapper[5113]: I0121 09:21:54.481930 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Jan 21 09:21:54 crc kubenswrapper[5113]: I0121 09:21:54.531894 5113 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Jan 21 09:21:54 crc kubenswrapper[5113]: I0121 09:21:54.557928 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Jan 21 09:21:54 crc kubenswrapper[5113]: I0121 09:21:54.707776 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Jan 21 09:21:54 crc kubenswrapper[5113]: I0121 09:21:54.722494 5113 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 21 09:21:54 crc kubenswrapper[5113]: I0121 09:21:54.755992 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Jan 21 09:21:54 crc kubenswrapper[5113]: I0121 09:21:54.787062 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Jan 21 09:21:54 crc kubenswrapper[5113]: I0121 09:21:54.811829 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Jan 21 09:21:54 crc kubenswrapper[5113]: I0121 09:21:54.865486 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Jan 21 09:21:54 crc kubenswrapper[5113]: I0121 09:21:54.935485 5113 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 21 09:21:54 crc kubenswrapper[5113]: I0121 09:21:54.955519 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Jan 21 09:21:54 crc kubenswrapper[5113]: I0121 09:21:54.988368 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 09:21:55 crc kubenswrapper[5113]: I0121 09:21:55.044291 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Jan 21 09:21:55 crc kubenswrapper[5113]: I0121 09:21:55.087405 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Jan 21 09:21:55 crc kubenswrapper[5113]: I0121 09:21:55.108251 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Jan 21 09:21:55 crc kubenswrapper[5113]: I0121 09:21:55.240572 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Jan 21 09:21:55 crc kubenswrapper[5113]: I0121 09:21:55.276666 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Jan 21 09:21:55 crc kubenswrapper[5113]: I0121 09:21:55.313432 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Jan 21 09:21:55 crc kubenswrapper[5113]: I0121 09:21:55.334917 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Jan 21 09:21:55 crc kubenswrapper[5113]: I0121 09:21:55.435659 5113 reflector.go:430] "Caches 
populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\""
Jan 21 09:21:55 crc kubenswrapper[5113]: I0121 09:21:55.447241 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\""
Jan 21 09:21:55 crc kubenswrapper[5113]: I0121 09:21:55.461612 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\""
Jan 21 09:21:55 crc kubenswrapper[5113]: I0121 09:21:55.532617 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Jan 21 09:21:55 crc kubenswrapper[5113]: I0121 09:21:55.587422 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\""
Jan 21 09:21:55 crc kubenswrapper[5113]: I0121 09:21:55.749265 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\""
Jan 21 09:21:55 crc kubenswrapper[5113]: I0121 09:21:55.808581 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\""
Jan 21 09:21:55 crc kubenswrapper[5113]: I0121 09:21:55.878502 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\""
Jan 21 09:21:55 crc kubenswrapper[5113]: I0121 09:21:55.935285 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\""
Jan 21 09:21:55 crc kubenswrapper[5113]: I0121 09:21:55.966857 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\""
Jan 21 09:21:55 crc kubenswrapper[5113]: I0121 09:21:55.974982 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\""
Jan 21 09:21:56 crc kubenswrapper[5113]: I0121 09:21:56.075265 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\""
Jan 21 09:21:56 crc kubenswrapper[5113]: I0121 09:21:56.117022 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\""
Jan 21 09:21:56 crc kubenswrapper[5113]: I0121 09:21:56.169001 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\""
Jan 21 09:21:56 crc kubenswrapper[5113]: I0121 09:21:56.175848 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\""
Jan 21 09:21:56 crc kubenswrapper[5113]: I0121 09:21:56.261598 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\""
Jan 21 09:21:56 crc kubenswrapper[5113]: I0121 09:21:56.317802 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\""
Jan 21 09:21:56 crc kubenswrapper[5113]: I0121 09:21:56.397840 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\""
Jan 21 09:21:56 crc kubenswrapper[5113]: I0121 09:21:56.435964 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\""
Jan 21 09:21:56 crc kubenswrapper[5113]: I0121 09:21:56.442746 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\""
Jan 21 09:21:56 crc kubenswrapper[5113]: I0121 09:21:56.451815 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\""
Jan 21 09:21:56 crc kubenswrapper[5113]: I0121 09:21:56.488058 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\""
Jan 21 09:21:56 crc kubenswrapper[5113]: I0121 09:21:56.563706 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\""
Jan 21 09:21:56 crc kubenswrapper[5113]: I0121 09:21:56.580483 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\""
Jan 21 09:21:56 crc kubenswrapper[5113]: I0121 09:21:56.606933 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\""
Jan 21 09:21:56 crc kubenswrapper[5113]: I0121 09:21:56.666663 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Jan 21 09:21:56 crc kubenswrapper[5113]: I0121 09:21:56.847447 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\""
Jan 21 09:21:56 crc kubenswrapper[5113]: I0121 09:21:56.858067 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\""
Jan 21 09:21:56 crc kubenswrapper[5113]: I0121 09:21:56.903910 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\""
Jan 21 09:21:56 crc kubenswrapper[5113]: I0121 09:21:56.912416 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\""
Jan 21 09:21:56 crc kubenswrapper[5113]: I0121 09:21:56.937405 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\""
Jan 21 09:21:56 crc kubenswrapper[5113]: I0121 09:21:56.975304 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\""
Jan 21 09:21:57 crc kubenswrapper[5113]: I0121 09:21:57.109102 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\""
Jan 21 09:21:57 crc kubenswrapper[5113]: I0121 09:21:57.194240 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Jan 21 09:21:57 crc kubenswrapper[5113]: I0121 09:21:57.219099 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\""
Jan 21 09:21:57 crc kubenswrapper[5113]: I0121 09:21:57.241141 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\""
Jan 21 09:21:57 crc kubenswrapper[5113]: I0121 09:21:57.249537 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\""
Jan 21 09:21:57 crc kubenswrapper[5113]: I0121 09:21:57.294029 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\""
Jan 21 09:21:57 crc kubenswrapper[5113]: I0121 09:21:57.344422 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\""
Jan 21 09:21:57 crc kubenswrapper[5113]: I0121 09:21:57.408029 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\""
Jan 21 09:21:57 crc kubenswrapper[5113]: I0121 09:21:57.453019 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\""
Jan 21 09:21:57 crc kubenswrapper[5113]: I0121 09:21:57.529139 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\""
Jan 21 09:21:57 crc kubenswrapper[5113]: I0121 09:21:57.547078 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\""
Jan 21 09:21:57 crc kubenswrapper[5113]: I0121 09:21:57.685014 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\""
Jan 21 09:21:57 crc kubenswrapper[5113]: I0121 09:21:57.712510 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\""
Jan 21 09:21:57 crc kubenswrapper[5113]: I0121 09:21:57.713913 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\""
Jan 21 09:21:57 crc kubenswrapper[5113]: I0121 09:21:57.806965 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\""
Jan 21 09:21:57 crc kubenswrapper[5113]: I0121 09:21:57.815767 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\""
Jan 21 09:21:57 crc kubenswrapper[5113]: I0121 09:21:57.865839 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\""
Jan 21 09:21:57 crc kubenswrapper[5113]: I0121 09:21:57.886801 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\""
Jan 21 09:21:57 crc kubenswrapper[5113]: I0121 09:21:57.954193 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\""
Jan 21 09:21:57 crc kubenswrapper[5113]: I0121 09:21:57.965586 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\""
Jan 21 09:21:58 crc kubenswrapper[5113]: I0121 09:21:58.132248 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Jan 21 09:21:58 crc kubenswrapper[5113]: I0121 09:21:58.256649 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\""
Jan 21 09:21:58 crc kubenswrapper[5113]: I0121 09:21:58.339921 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 09:21:58 crc kubenswrapper[5113]: I0121 09:21:58.340006 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 09:21:58 crc kubenswrapper[5113]: I0121 09:21:58.343994 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\""
Jan 21 09:21:58 crc kubenswrapper[5113]: I0121 09:21:58.384839 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\""
Jan 21 09:21:58 crc kubenswrapper[5113]: I0121 09:21:58.385298 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\""
Jan 21 09:21:58 crc kubenswrapper[5113]: I0121 09:21:58.404722 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\""
Jan 21 09:21:58 crc kubenswrapper[5113]: I0121 09:21:58.494565 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\""
Jan 21 09:21:58 crc kubenswrapper[5113]: I0121 09:21:58.543139 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\""
Jan 21 09:21:58 crc kubenswrapper[5113]: I0121 09:21:58.569143 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\""
Jan 21 09:21:58 crc kubenswrapper[5113]: I0121 09:21:58.697130 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\""
Jan 21 09:21:58 crc kubenswrapper[5113]: I0121 09:21:58.848169 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\""
Jan 21 09:21:58 crc kubenswrapper[5113]: I0121 09:21:58.955484 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\""
Jan 21 09:21:59 crc kubenswrapper[5113]: I0121 09:21:59.006573 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\""
Jan 21 09:21:59 crc kubenswrapper[5113]: I0121 09:21:59.080873 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\""
Jan 21 09:21:59 crc kubenswrapper[5113]: I0121 09:21:59.100683 5113 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 21 09:21:59 crc kubenswrapper[5113]: I0121 09:21:59.164298 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\""
Jan 21 09:21:59 crc kubenswrapper[5113]: I0121 09:21:59.183126 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\""
Jan 21 09:21:59 crc kubenswrapper[5113]: I0121 09:21:59.223498 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\""
Jan 21 09:21:59 crc kubenswrapper[5113]: I0121 09:21:59.379245 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\""
Jan 21 09:21:59 crc kubenswrapper[5113]: I0121 09:21:59.384455 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\""
Jan 21 09:21:59 crc kubenswrapper[5113]: I0121 09:21:59.415759 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\""
Jan 21 09:21:59 crc kubenswrapper[5113]: I0121 09:21:59.497320 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\""
Jan 21 09:21:59 crc kubenswrapper[5113]: I0121 09:21:59.511089 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\""
Jan 21 09:21:59 crc kubenswrapper[5113]: I0121 09:21:59.550829 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\""
Jan 21 09:21:59 crc kubenswrapper[5113]: I0121 09:21:59.631359 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\""
Jan 21 09:21:59 crc kubenswrapper[5113]: I0121 09:21:59.664452 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\""
Jan 21 09:21:59 crc kubenswrapper[5113]: I0121 09:21:59.673773 5113 ???:1] "http: TLS handshake error from 192.168.126.11:51428: no serving certificate available for the kubelet"
Jan 21 09:21:59 crc kubenswrapper[5113]: I0121 09:21:59.702098 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\""
Jan 21 09:21:59 crc kubenswrapper[5113]: I0121 09:21:59.747764 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\""
Jan 21 09:21:59 crc kubenswrapper[5113]: I0121 09:21:59.827911 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\""
Jan 21 09:22:00 crc kubenswrapper[5113]: I0121 09:22:00.065985 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\""
Jan 21 09:22:00 crc kubenswrapper[5113]: I0121 09:22:00.110480 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\""
Jan 21 09:22:00 crc kubenswrapper[5113]: I0121 09:22:00.216320 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\""
Jan 21 09:22:00 crc kubenswrapper[5113]: I0121 09:22:00.240071 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\""
Jan 21 09:22:00 crc kubenswrapper[5113]: I0121 09:22:00.263969 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\""
Jan 21 09:22:00 crc kubenswrapper[5113]: I0121 09:22:00.310346 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\""
Jan 21 09:22:00 crc kubenswrapper[5113]: I0121 09:22:00.415239 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\""
Jan 21 09:22:00 crc kubenswrapper[5113]: I0121 09:22:00.431455 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\""
Jan 21 09:22:00 crc kubenswrapper[5113]: I0121 09:22:00.484674 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\""
Jan 21 09:22:00 crc kubenswrapper[5113]: I0121 09:22:00.591177 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\""
Jan 21 09:22:00 crc kubenswrapper[5113]: I0121 09:22:00.691644 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Jan 21 09:22:00 crc kubenswrapper[5113]: I0121 09:22:00.828466 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\""
Jan 21 09:22:00 crc kubenswrapper[5113]: I0121 09:22:00.915435 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\""
Jan 21 09:22:00 crc kubenswrapper[5113]: I0121 09:22:00.926555 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\""
Jan 21 09:22:00 crc kubenswrapper[5113]: I0121 09:22:00.971456 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\""
Jan 21 09:22:01 crc kubenswrapper[5113]: I0121 09:22:01.039853 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\""
Jan 21 09:22:01 crc kubenswrapper[5113]: I0121 09:22:01.077353 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\""
Jan 21 09:22:01 crc kubenswrapper[5113]: I0121 09:22:01.141401 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\""
Jan 21 09:22:01 crc kubenswrapper[5113]: I0121 09:22:01.175797 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\""
Jan 21 09:22:01 crc kubenswrapper[5113]: I0121 09:22:01.263244 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\""
Jan 21 09:22:01 crc kubenswrapper[5113]: I0121 09:22:01.425198 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\""
Jan 21 09:22:01 crc kubenswrapper[5113]: I0121 09:22:01.461359 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\""
Jan 21 09:22:01 crc kubenswrapper[5113]: I0121 09:22:01.518135 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\""
Jan 21 09:22:01 crc kubenswrapper[5113]: I0121 09:22:01.595469 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\""
Jan 21 09:22:01 crc kubenswrapper[5113]: I0121 09:22:01.614495 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\""
Jan 21 09:22:01 crc kubenswrapper[5113]: I0121 09:22:01.666897 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\""
Jan 21 09:22:01 crc kubenswrapper[5113]: I0121 09:22:01.720809 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\""
Jan 21 09:22:01 crc kubenswrapper[5113]: I0121 09:22:01.805116 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\""
Jan 21 09:22:01 crc kubenswrapper[5113]: I0121 09:22:01.811959 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\""
Jan 21 09:22:01 crc kubenswrapper[5113]: I0121 09:22:01.822320 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\""
Jan 21 09:22:01 crc kubenswrapper[5113]: I0121 09:22:01.885429 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\""
Jan 21 09:22:01 crc kubenswrapper[5113]: I0121 09:22:01.928387 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\""
Jan 21 09:22:01 crc kubenswrapper[5113]: I0121 09:22:01.988057 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\""
Jan 21 09:22:01 crc kubenswrapper[5113]: I0121 09:22:01.999571 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\""
Jan 21 09:22:02 crc kubenswrapper[5113]: I0121 09:22:02.007244 5113 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 21 09:22:02 crc kubenswrapper[5113]: I0121 09:22:02.019258 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\""
Jan 21 09:22:02 crc kubenswrapper[5113]: I0121 09:22:02.074970 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\""
Jan 21 09:22:02 crc kubenswrapper[5113]: I0121 09:22:02.130122 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\""
Jan 21 09:22:02 crc kubenswrapper[5113]: I0121 09:22:02.172343 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\""
Jan 21 09:22:02 crc kubenswrapper[5113]: I0121 09:22:02.204356 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\""
Jan 21 09:22:02 crc kubenswrapper[5113]: I0121 09:22:02.206985 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\""
Jan 21 09:22:02 crc kubenswrapper[5113]: I0121 09:22:02.241018 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\""
Jan 21 09:22:02 crc kubenswrapper[5113]: I0121 09:22:02.257589 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\""
Jan 21 09:22:02 crc kubenswrapper[5113]: I0121 09:22:02.265597 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\""
Jan 21 09:22:02 crc kubenswrapper[5113]: I0121 09:22:02.311776 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\""
Jan 21 09:22:02 crc kubenswrapper[5113]: I0121 09:22:02.387034 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\""
Jan 21 09:22:02 crc kubenswrapper[5113]: I0121 09:22:02.391644 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\""
Jan 21 09:22:02 crc kubenswrapper[5113]: I0121 09:22:02.441665 5113 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160"
Jan 21 09:22:02 crc kubenswrapper[5113]: I0121 09:22:02.467420 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Jan 21 09:22:02 crc kubenswrapper[5113]: I0121 09:22:02.516850 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\""
Jan 21 09:22:02 crc kubenswrapper[5113]: I0121 09:22:02.570399 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\""
Jan 21 09:22:02 crc kubenswrapper[5113]: I0121 09:22:02.594327 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\""
Jan 21 09:22:02 crc kubenswrapper[5113]: I0121 09:22:02.608633 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\""
Jan 21 09:22:02 crc kubenswrapper[5113]: I0121 09:22:02.712975 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\""
Jan 21 09:22:02 crc kubenswrapper[5113]: I0121 09:22:02.875789 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\""
Jan 21 09:22:02 crc kubenswrapper[5113]: I0121 09:22:02.877351 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\""
Jan 21 09:22:03 crc kubenswrapper[5113]: I0121 09:22:03.008663 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\""
Jan 21 09:22:03 crc kubenswrapper[5113]: I0121 09:22:03.104830 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\""
Jan 21 09:22:03 crc kubenswrapper[5113]: I0121 09:22:03.112819 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\""
Jan 21 09:22:03 crc kubenswrapper[5113]: I0121 09:22:03.128396 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\""
Jan 21 09:22:03 crc kubenswrapper[5113]: I0121 09:22:03.131156 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\""
Jan 21 09:22:03 crc kubenswrapper[5113]: I0121 09:22:03.202949 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\""
Jan 21 09:22:03 crc kubenswrapper[5113]: I0121 09:22:03.314038 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Jan 21 09:22:03 crc kubenswrapper[5113]: I0121 09:22:03.345105 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\""
Jan 21 09:22:03 crc kubenswrapper[5113]: I0121 09:22:03.386271 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\""
Jan 21 09:22:03 crc kubenswrapper[5113]: I0121 09:22:03.410303 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\""
Jan 21 09:22:03 crc kubenswrapper[5113]: I0121 09:22:03.428109 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\""
Jan 21 09:22:03 crc kubenswrapper[5113]: I0121 09:22:03.456711 5113 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 21 09:22:03 crc kubenswrapper[5113]: I0121 09:22:03.457231 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://4e3d199fab8d276e5c7f984c23ce39dd686acc2b372a501174c9b11f0cdda7f4" gracePeriod=5
Jan 21 09:22:03 crc kubenswrapper[5113]: I0121 09:22:03.488567 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Jan 21 09:22:03 crc kubenswrapper[5113]: I0121 09:22:03.522435 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\""
Jan 21 09:22:03 crc kubenswrapper[5113]: I0121 09:22:03.534545 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\""
Jan 21 09:22:03 crc kubenswrapper[5113]: I0121 09:22:03.595135 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Jan 21 09:22:03 crc kubenswrapper[5113]: I0121 09:22:03.684223 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\""
Jan 21 09:22:03 crc kubenswrapper[5113]: I0121 09:22:03.685471 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\""
Jan 21 09:22:03 crc kubenswrapper[5113]: I0121 09:22:03.719391 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\""
Jan 21 09:22:03 crc kubenswrapper[5113]: I0121 09:22:03.788449 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\""
Jan 21 09:22:03 crc kubenswrapper[5113]: I0121 09:22:03.861867 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\""
Jan 21 09:22:03 crc kubenswrapper[5113]: I0121 09:22:03.891945 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\""
Jan 21 09:22:03 crc kubenswrapper[5113]: I0121 09:22:03.909516 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\""
Jan 21 09:22:03 crc kubenswrapper[5113]: I0121 09:22:03.912430 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\""
Jan 21 09:22:03 crc kubenswrapper[5113]: I0121 09:22:03.936320 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\""
Jan 21 09:22:03 crc kubenswrapper[5113]: I0121 09:22:03.955828 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\""
Jan 21 09:22:04 crc kubenswrapper[5113]: I0121 09:22:04.045880 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\""
Jan 21 09:22:04 crc kubenswrapper[5113]: I0121 09:22:04.087992 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\""
Jan 21 09:22:04 crc kubenswrapper[5113]: I0121 09:22:04.279102 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\""
Jan 21 09:22:04 crc kubenswrapper[5113]: I0121 09:22:04.281272 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\""
Jan 21 09:22:04 crc kubenswrapper[5113]: I0121 09:22:04.334703 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\""
Jan 21 09:22:04 crc kubenswrapper[5113]: I0121 09:22:04.374958 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\""
Jan 21 09:22:04 crc kubenswrapper[5113]: I0121 09:22:04.389023 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\""
Jan 21 09:22:04 crc kubenswrapper[5113]: I0121 09:22:04.474819 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\""
Jan 21 09:22:04 crc kubenswrapper[5113]: I0121 09:22:04.477906 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\""
Jan 21 09:22:04 crc kubenswrapper[5113]: I0121 09:22:04.488210 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\""
Jan 21 09:22:04 crc kubenswrapper[5113]: I0121 09:22:04.545371 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\""
Jan 21 09:22:04 crc kubenswrapper[5113]: I0121 09:22:04.577230 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\""
Jan 21 09:22:04 crc kubenswrapper[5113]: I0121 09:22:04.676020 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\""
Jan 21 09:22:04 crc kubenswrapper[5113]: I0121 09:22:04.759879 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\""
Jan 21 09:22:05 crc kubenswrapper[5113]: I0121 09:22:05.060103 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\""
Jan 21 09:22:05 crc kubenswrapper[5113]: I0121 09:22:05.073369 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\""
Jan 21 09:22:05 crc kubenswrapper[5113]: I0121 09:22:05.142361 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\""
Jan 21 09:22:05 crc kubenswrapper[5113]: I0121 09:22:05.193259 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\""
Jan 21 09:22:05 crc kubenswrapper[5113]: I0121 09:22:05.266362 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\""
Jan 21 09:22:05 crc kubenswrapper[5113]: I0121 09:22:05.362154 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\""
Jan 21 09:22:05 crc kubenswrapper[5113]: I0121 09:22:05.367076 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\""
Jan 21 09:22:05 crc kubenswrapper[5113]: I0121 09:22:05.621021 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\""
Jan 21 09:22:05 crc kubenswrapper[5113]: I0121 09:22:05.653347 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\""
Jan 21 09:22:05 crc kubenswrapper[5113]: I0121 09:22:05.810952 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\""
Jan 21 09:22:06 crc kubenswrapper[5113]: I0121 09:22:06.000040 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\""
Jan 21 09:22:06 crc kubenswrapper[5113]: I0121 09:22:06.060344 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\""
Jan 21 09:22:06 crc kubenswrapper[5113]: I0121 09:22:06.085611 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\""
Jan 21 09:22:06 crc kubenswrapper[5113]: I0121 09:22:06.090495 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\""
Jan 21 09:22:06 crc kubenswrapper[5113]: I0121 09:22:06.096701 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\""
Jan 21 09:22:06 crc kubenswrapper[5113]: I0121 09:22:06.161630 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Jan 21 09:22:06 crc kubenswrapper[5113]: I0121 09:22:06.195378 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\""
Jan 21 09:22:06 crc kubenswrapper[5113]: I0121 09:22:06.226899 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\""
Jan 21 09:22:06 crc kubenswrapper[5113]: I0121 09:22:06.407126 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\""
Jan 21 09:22:06 crc kubenswrapper[5113]: I0121 09:22:06.507549 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\""
Jan 21 09:22:06 crc kubenswrapper[5113]: I0121 09:22:06.695174 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\""
Jan 21 09:22:06 crc kubenswrapper[5113]: I0121 09:22:06.713152
5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Jan 21 09:22:06 crc kubenswrapper[5113]: I0121 09:22:06.867909 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Jan 21 09:22:07 crc kubenswrapper[5113]: I0121 09:22:07.000517 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Jan 21 09:22:07 crc kubenswrapper[5113]: I0121 09:22:07.211278 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 21 09:22:07 crc kubenswrapper[5113]: I0121 09:22:07.264097 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Jan 21 09:22:07 crc kubenswrapper[5113]: I0121 09:22:07.410811 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Jan 21 09:22:07 crc kubenswrapper[5113]: I0121 09:22:07.681307 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Jan 21 09:22:07 crc kubenswrapper[5113]: I0121 09:22:07.724103 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 21 09:22:07 crc kubenswrapper[5113]: I0121 09:22:07.979335 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Jan 21 09:22:08 crc kubenswrapper[5113]: I0121 09:22:08.141456 5113 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Jan 21 09:22:08 crc kubenswrapper[5113]: I0121 09:22:08.965440 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Jan 21 09:22:09 crc kubenswrapper[5113]: I0121 09:22:09.046163 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Jan 21 09:22:09 crc kubenswrapper[5113]: I0121 09:22:09.059096 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Jan 21 09:22:09 crc kubenswrapper[5113]: I0121 09:22:09.059202 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:22:09 crc kubenswrapper[5113]: I0121 09:22:09.061146 5113 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Jan 21 09:22:09 crc kubenswrapper[5113]: I0121 09:22:09.110086 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 21 09:22:09 crc kubenswrapper[5113]: I0121 09:22:09.110181 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 21 09:22:09 crc kubenswrapper[5113]: I0121 09:22:09.110318 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:22:09 crc kubenswrapper[5113]: I0121 09:22:09.110393 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 21 09:22:09 crc kubenswrapper[5113]: I0121 09:22:09.110434 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 21 09:22:09 crc kubenswrapper[5113]: I0121 09:22:09.110354 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:22:09 crc kubenswrapper[5113]: I0121 09:22:09.110468 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). 
InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:22:09 crc kubenswrapper[5113]: I0121 09:22:09.110505 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 21 09:22:09 crc kubenswrapper[5113]: I0121 09:22:09.110511 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:22:09 crc kubenswrapper[5113]: I0121 09:22:09.111049 5113 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\"" Jan 21 09:22:09 crc kubenswrapper[5113]: I0121 09:22:09.111084 5113 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\"" Jan 21 09:22:09 crc kubenswrapper[5113]: I0121 09:22:09.111107 5113 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\"" Jan 21 09:22:09 crc kubenswrapper[5113]: I0121 09:22:09.111130 5113 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 09:22:09 crc kubenswrapper[5113]: I0121 09:22:09.120804 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:22:09 crc kubenswrapper[5113]: I0121 09:22:09.148507 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Jan 21 09:22:09 crc kubenswrapper[5113]: I0121 09:22:09.148555 5113 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="4e3d199fab8d276e5c7f984c23ce39dd686acc2b372a501174c9b11f0cdda7f4" exitCode=137 Jan 21 09:22:09 crc kubenswrapper[5113]: I0121 09:22:09.148639 5113 scope.go:117] "RemoveContainer" containerID="4e3d199fab8d276e5c7f984c23ce39dd686acc2b372a501174c9b11f0cdda7f4" Jan 21 09:22:09 crc kubenswrapper[5113]: I0121 09:22:09.148807 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 09:22:09 crc kubenswrapper[5113]: I0121 09:22:09.177181 5113 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Jan 21 09:22:09 crc kubenswrapper[5113]: I0121 09:22:09.181043 5113 scope.go:117] "RemoveContainer" containerID="4e3d199fab8d276e5c7f984c23ce39dd686acc2b372a501174c9b11f0cdda7f4" Jan 21 09:22:09 crc kubenswrapper[5113]: E0121 09:22:09.181445 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e3d199fab8d276e5c7f984c23ce39dd686acc2b372a501174c9b11f0cdda7f4\": container with ID starting with 4e3d199fab8d276e5c7f984c23ce39dd686acc2b372a501174c9b11f0cdda7f4 not found: ID does not exist" containerID="4e3d199fab8d276e5c7f984c23ce39dd686acc2b372a501174c9b11f0cdda7f4" Jan 21 09:22:09 crc kubenswrapper[5113]: I0121 09:22:09.181483 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e3d199fab8d276e5c7f984c23ce39dd686acc2b372a501174c9b11f0cdda7f4"} err="failed to get container status \"4e3d199fab8d276e5c7f984c23ce39dd686acc2b372a501174c9b11f0cdda7f4\": rpc error: code = NotFound desc = could not find container \"4e3d199fab8d276e5c7f984c23ce39dd686acc2b372a501174c9b11f0cdda7f4\": container with ID starting with 4e3d199fab8d276e5c7f984c23ce39dd686acc2b372a501174c9b11f0cdda7f4 not found: ID does not exist" Jan 21 09:22:09 crc kubenswrapper[5113]: I0121 09:22:09.212532 5113 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 09:22:09 crc kubenswrapper[5113]: I0121 09:22:09.347700 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Jan 21 09:22:09 crc kubenswrapper[5113]: I0121 09:22:09.676824 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Jan 21 09:22:10 crc kubenswrapper[5113]: I0121 09:22:10.186290 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Jan 21 09:22:10 crc kubenswrapper[5113]: I0121 09:22:10.253869 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Jan 21 09:22:10 crc kubenswrapper[5113]: I0121 09:22:10.391643 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Jan 21 09:22:10 crc kubenswrapper[5113]: I0121 09:22:10.849782 5113 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Jan 21 09:22:10 crc kubenswrapper[5113]: I0121 09:22:10.851140 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes" Jan 21 09:22:21 crc kubenswrapper[5113]: I0121 09:22:21.226116 5113 generic.go:358] "Generic (PLEG): container finished" podID="8f7e099d-81da-48a0-bf4a-c152167e8f40" 
containerID="b6fc78891198b1aae3c811ea72d652b8b660b4df94fde6c26f6ca98f75021677" exitCode=0 Jan 21 09:22:21 crc kubenswrapper[5113]: I0121 09:22:21.226305 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-6zxjl" event={"ID":"8f7e099d-81da-48a0-bf4a-c152167e8f40","Type":"ContainerDied","Data":"b6fc78891198b1aae3c811ea72d652b8b660b4df94fde6c26f6ca98f75021677"} Jan 21 09:22:21 crc kubenswrapper[5113]: I0121 09:22:21.226973 5113 scope.go:117] "RemoveContainer" containerID="b6fc78891198b1aae3c811ea72d652b8b660b4df94fde6c26f6ca98f75021677" Jan 21 09:22:21 crc kubenswrapper[5113]: I0121 09:22:21.606700 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-6zxjl" Jan 21 09:22:22 crc kubenswrapper[5113]: I0121 09:22:22.234637 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-6zxjl" event={"ID":"8f7e099d-81da-48a0-bf4a-c152167e8f40","Type":"ContainerStarted","Data":"3659fd5be751b585b45b3eb342f36fe90005ba36c6f95636f4eb062a039fb08a"} Jan 21 09:22:22 crc kubenswrapper[5113]: I0121 09:22:22.234724 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-6zxjl" Jan 21 09:22:22 crc kubenswrapper[5113]: I0121 09:22:22.236763 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-6zxjl" Jan 21 09:22:28 crc kubenswrapper[5113]: I0121 09:22:28.340394 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 09:22:28 crc kubenswrapper[5113]: I0121 09:22:28.341213 5113 prober.go:120] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 09:22:28 crc kubenswrapper[5113]: I0121 09:22:28.341283 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" Jan 21 09:22:28 crc kubenswrapper[5113]: I0121 09:22:28.342209 5113 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d5e35405358954b4e44654baa2fb5a0a4140312ae1ab9e63625c319c1fc7a9a7"} pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 09:22:28 crc kubenswrapper[5113]: I0121 09:22:28.342354 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" containerID="cri-o://d5e35405358954b4e44654baa2fb5a0a4140312ae1ab9e63625c319c1fc7a9a7" gracePeriod=600 Jan 21 09:22:29 crc kubenswrapper[5113]: I0121 09:22:29.036291 5113 ???:1] "http: TLS handshake error from 192.168.126.11:49228: no serving certificate available for the kubelet" Jan 21 09:22:29 crc kubenswrapper[5113]: I0121 09:22:29.291940 5113 generic.go:358] "Generic (PLEG): container finished" podID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerID="d5e35405358954b4e44654baa2fb5a0a4140312ae1ab9e63625c319c1fc7a9a7" exitCode=0 Jan 21 09:22:29 crc kubenswrapper[5113]: I0121 09:22:29.292056 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" 
event={"ID":"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd","Type":"ContainerDied","Data":"d5e35405358954b4e44654baa2fb5a0a4140312ae1ab9e63625c319c1fc7a9a7"} Jan 21 09:22:29 crc kubenswrapper[5113]: I0121 09:22:29.292132 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" event={"ID":"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd","Type":"ContainerStarted","Data":"fcba58102e4ffc568fba5db5a32bae6eab170fa71ae03eed6be1f8584029c248"} Jan 21 09:22:39 crc kubenswrapper[5113]: I0121 09:22:39.436302 5113 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 09:22:40 crc kubenswrapper[5113]: I0121 09:22:40.314449 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-f74j6"] Jan 21 09:22:51 crc kubenswrapper[5113]: I0121 09:22:51.053048 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 09:22:51 crc kubenswrapper[5113]: I0121 09:22:51.053848 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 09:23:05 crc kubenswrapper[5113]: I0121 09:23:05.339923 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-f74j6" podUID="92e0cb66-e547-44e5-a384-bd522e554577" containerName="oauth-openshift" containerID="cri-o://4c8fe81d86c85b10457f62f2c42c30e546dd24f124e2fd302fc9e010638ac1d6" gracePeriod=15 Jan 21 09:23:05 crc kubenswrapper[5113]: I0121 09:23:05.500812 5113 generic.go:358] "Generic (PLEG): container finished" podID="92e0cb66-e547-44e5-a384-bd522e554577" containerID="4c8fe81d86c85b10457f62f2c42c30e546dd24f124e2fd302fc9e010638ac1d6" 
exitCode=0 Jan 21 09:23:05 crc kubenswrapper[5113]: I0121 09:23:05.500936 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-f74j6" event={"ID":"92e0cb66-e547-44e5-a384-bd522e554577","Type":"ContainerDied","Data":"4c8fe81d86c85b10457f62f2c42c30e546dd24f124e2fd302fc9e010638ac1d6"} Jan 21 09:23:05 crc kubenswrapper[5113]: I0121 09:23:05.828767 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-f74j6" Jan 21 09:23:05 crc kubenswrapper[5113]: I0121 09:23:05.878400 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5b65fcf68f-w44q2"] Jan 21 09:23:05 crc kubenswrapper[5113]: I0121 09:23:05.879311 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="92e0cb66-e547-44e5-a384-bd522e554577" containerName="oauth-openshift" Jan 21 09:23:05 crc kubenswrapper[5113]: I0121 09:23:05.879421 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="92e0cb66-e547-44e5-a384-bd522e554577" containerName="oauth-openshift" Jan 21 09:23:05 crc kubenswrapper[5113]: I0121 09:23:05.879515 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5ebafce1-e146-4504-982d-5d5a30f42c6f" containerName="installer" Jan 21 09:23:05 crc kubenswrapper[5113]: I0121 09:23:05.879602 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ebafce1-e146-4504-982d-5d5a30f42c6f" containerName="installer" Jan 21 09:23:05 crc kubenswrapper[5113]: I0121 09:23:05.879718 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 21 09:23:05 crc kubenswrapper[5113]: I0121 09:23:05.879830 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 21 09:23:05 crc kubenswrapper[5113]: I0121 
09:23:05.880036 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="5ebafce1-e146-4504-982d-5d5a30f42c6f" containerName="installer" Jan 21 09:23:05 crc kubenswrapper[5113]: I0121 09:23:05.880129 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 21 09:23:05 crc kubenswrapper[5113]: I0121 09:23:05.880216 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="92e0cb66-e547-44e5-a384-bd522e554577" containerName="oauth-openshift" Jan 21 09:23:05 crc kubenswrapper[5113]: I0121 09:23:05.884232 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 09:23:05 crc kubenswrapper[5113]: I0121 09:23:05.895882 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5b65fcf68f-w44q2"] Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.000938 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-system-serving-cert\") pod \"92e0cb66-e547-44e5-a384-bd522e554577\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.001023 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-system-service-ca\") pod \"92e0cb66-e547-44e5-a384-bd522e554577\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.001050 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-user-idp-0-file-data\") 
pod \"92e0cb66-e547-44e5-a384-bd522e554577\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.001106 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-system-cliconfig\") pod \"92e0cb66-e547-44e5-a384-bd522e554577\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.001140 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/92e0cb66-e547-44e5-a384-bd522e554577-audit-policies\") pod \"92e0cb66-e547-44e5-a384-bd522e554577\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.001167 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlth5\" (UniqueName: \"kubernetes.io/projected/92e0cb66-e547-44e5-a384-bd522e554577-kube-api-access-vlth5\") pod \"92e0cb66-e547-44e5-a384-bd522e554577\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.001232 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-system-trusted-ca-bundle\") pod \"92e0cb66-e547-44e5-a384-bd522e554577\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.001286 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-system-session\") pod \"92e0cb66-e547-44e5-a384-bd522e554577\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " Jan 21 09:23:06 crc 
kubenswrapper[5113]: I0121 09:23:06.001343 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-user-template-login\") pod \"92e0cb66-e547-44e5-a384-bd522e554577\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.001380 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-user-template-error\") pod \"92e0cb66-e547-44e5-a384-bd522e554577\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.001414 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/92e0cb66-e547-44e5-a384-bd522e554577-audit-dir\") pod \"92e0cb66-e547-44e5-a384-bd522e554577\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.001456 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-system-ocp-branding-template\") pod \"92e0cb66-e547-44e5-a384-bd522e554577\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.001494 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-system-router-certs\") pod \"92e0cb66-e547-44e5-a384-bd522e554577\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.001523 5113 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-user-template-provider-selection\") pod \"92e0cb66-e547-44e5-a384-bd522e554577\" (UID: \"92e0cb66-e547-44e5-a384-bd522e554577\") " Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.001627 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/80d7dc27-296f-4ba6-9348-dd6df0bed942-v4-0-config-user-template-login\") pod \"oauth-openshift-5b65fcf68f-w44q2\" (UID: \"80d7dc27-296f-4ba6-9348-dd6df0bed942\") " pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.001672 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80d7dc27-296f-4ba6-9348-dd6df0bed942-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5b65fcf68f-w44q2\" (UID: \"80d7dc27-296f-4ba6-9348-dd6df0bed942\") " pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.001701 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtnv8\" (UniqueName: \"kubernetes.io/projected/80d7dc27-296f-4ba6-9348-dd6df0bed942-kube-api-access-gtnv8\") pod \"oauth-openshift-5b65fcf68f-w44q2\" (UID: \"80d7dc27-296f-4ba6-9348-dd6df0bed942\") " pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.001758 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/80d7dc27-296f-4ba6-9348-dd6df0bed942-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5b65fcf68f-w44q2\" (UID: \"80d7dc27-296f-4ba6-9348-dd6df0bed942\") " pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.001791 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/80d7dc27-296f-4ba6-9348-dd6df0bed942-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5b65fcf68f-w44q2\" (UID: \"80d7dc27-296f-4ba6-9348-dd6df0bed942\") " pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.001886 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/80d7dc27-296f-4ba6-9348-dd6df0bed942-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5b65fcf68f-w44q2\" (UID: \"80d7dc27-296f-4ba6-9348-dd6df0bed942\") " pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.001913 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/80d7dc27-296f-4ba6-9348-dd6df0bed942-v4-0-config-system-session\") pod \"oauth-openshift-5b65fcf68f-w44q2\" (UID: \"80d7dc27-296f-4ba6-9348-dd6df0bed942\") " pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.001939 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/80d7dc27-296f-4ba6-9348-dd6df0bed942-v4-0-config-system-ocp-branding-template\") pod 
\"oauth-openshift-5b65fcf68f-w44q2\" (UID: \"80d7dc27-296f-4ba6-9348-dd6df0bed942\") " pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.001966 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/80d7dc27-296f-4ba6-9348-dd6df0bed942-v4-0-config-system-service-ca\") pod \"oauth-openshift-5b65fcf68f-w44q2\" (UID: \"80d7dc27-296f-4ba6-9348-dd6df0bed942\") " pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.001992 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/80d7dc27-296f-4ba6-9348-dd6df0bed942-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5b65fcf68f-w44q2\" (UID: \"80d7dc27-296f-4ba6-9348-dd6df0bed942\") " pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.002018 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/80d7dc27-296f-4ba6-9348-dd6df0bed942-audit-policies\") pod \"oauth-openshift-5b65fcf68f-w44q2\" (UID: \"80d7dc27-296f-4ba6-9348-dd6df0bed942\") " pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.002071 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/80d7dc27-296f-4ba6-9348-dd6df0bed942-v4-0-config-system-router-certs\") pod \"oauth-openshift-5b65fcf68f-w44q2\" (UID: \"80d7dc27-296f-4ba6-9348-dd6df0bed942\") " pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 09:23:06 crc 
kubenswrapper[5113]: I0121 09:23:06.002099 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/80d7dc27-296f-4ba6-9348-dd6df0bed942-audit-dir\") pod \"oauth-openshift-5b65fcf68f-w44q2\" (UID: \"80d7dc27-296f-4ba6-9348-dd6df0bed942\") " pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.002120 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/80d7dc27-296f-4ba6-9348-dd6df0bed942-v4-0-config-user-template-error\") pod \"oauth-openshift-5b65fcf68f-w44q2\" (UID: \"80d7dc27-296f-4ba6-9348-dd6df0bed942\") " pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.003226 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "92e0cb66-e547-44e5-a384-bd522e554577" (UID: "92e0cb66-e547-44e5-a384-bd522e554577"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.003667 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "92e0cb66-e547-44e5-a384-bd522e554577" (UID: "92e0cb66-e547-44e5-a384-bd522e554577"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.004068 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92e0cb66-e547-44e5-a384-bd522e554577-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "92e0cb66-e547-44e5-a384-bd522e554577" (UID: "92e0cb66-e547-44e5-a384-bd522e554577"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.004538 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92e0cb66-e547-44e5-a384-bd522e554577-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "92e0cb66-e547-44e5-a384-bd522e554577" (UID: "92e0cb66-e547-44e5-a384-bd522e554577"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.005433 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "92e0cb66-e547-44e5-a384-bd522e554577" (UID: "92e0cb66-e547-44e5-a384-bd522e554577"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.009849 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "92e0cb66-e547-44e5-a384-bd522e554577" (UID: "92e0cb66-e547-44e5-a384-bd522e554577"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.010240 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "92e0cb66-e547-44e5-a384-bd522e554577" (UID: "92e0cb66-e547-44e5-a384-bd522e554577"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.010429 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92e0cb66-e547-44e5-a384-bd522e554577-kube-api-access-vlth5" (OuterVolumeSpecName: "kube-api-access-vlth5") pod "92e0cb66-e547-44e5-a384-bd522e554577" (UID: "92e0cb66-e547-44e5-a384-bd522e554577"). InnerVolumeSpecName "kube-api-access-vlth5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.011162 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "92e0cb66-e547-44e5-a384-bd522e554577" (UID: "92e0cb66-e547-44e5-a384-bd522e554577"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.012424 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "92e0cb66-e547-44e5-a384-bd522e554577" (UID: "92e0cb66-e547-44e5-a384-bd522e554577"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.012521 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "92e0cb66-e547-44e5-a384-bd522e554577" (UID: "92e0cb66-e547-44e5-a384-bd522e554577"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.012230 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "92e0cb66-e547-44e5-a384-bd522e554577" (UID: "92e0cb66-e547-44e5-a384-bd522e554577"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.013056 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "92e0cb66-e547-44e5-a384-bd522e554577" (UID: "92e0cb66-e547-44e5-a384-bd522e554577"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.013201 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "92e0cb66-e547-44e5-a384-bd522e554577" (UID: "92e0cb66-e547-44e5-a384-bd522e554577"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.114518 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80d7dc27-296f-4ba6-9348-dd6df0bed942-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5b65fcf68f-w44q2\" (UID: \"80d7dc27-296f-4ba6-9348-dd6df0bed942\") " pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.114663 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gtnv8\" (UniqueName: \"kubernetes.io/projected/80d7dc27-296f-4ba6-9348-dd6df0bed942-kube-api-access-gtnv8\") pod \"oauth-openshift-5b65fcf68f-w44q2\" (UID: \"80d7dc27-296f-4ba6-9348-dd6df0bed942\") " pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.114799 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/80d7dc27-296f-4ba6-9348-dd6df0bed942-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5b65fcf68f-w44q2\" (UID: \"80d7dc27-296f-4ba6-9348-dd6df0bed942\") " pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.114888 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/80d7dc27-296f-4ba6-9348-dd6df0bed942-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5b65fcf68f-w44q2\" (UID: \"80d7dc27-296f-4ba6-9348-dd6df0bed942\") " pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.115026 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/80d7dc27-296f-4ba6-9348-dd6df0bed942-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5b65fcf68f-w44q2\" (UID: \"80d7dc27-296f-4ba6-9348-dd6df0bed942\") " pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.115108 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/80d7dc27-296f-4ba6-9348-dd6df0bed942-v4-0-config-system-session\") pod \"oauth-openshift-5b65fcf68f-w44q2\" (UID: \"80d7dc27-296f-4ba6-9348-dd6df0bed942\") " pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.115181 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/80d7dc27-296f-4ba6-9348-dd6df0bed942-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5b65fcf68f-w44q2\" (UID: \"80d7dc27-296f-4ba6-9348-dd6df0bed942\") " pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.115266 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/80d7dc27-296f-4ba6-9348-dd6df0bed942-v4-0-config-system-service-ca\") pod \"oauth-openshift-5b65fcf68f-w44q2\" (UID: \"80d7dc27-296f-4ba6-9348-dd6df0bed942\") " pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.115349 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/80d7dc27-296f-4ba6-9348-dd6df0bed942-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5b65fcf68f-w44q2\" (UID: 
\"80d7dc27-296f-4ba6-9348-dd6df0bed942\") " pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.115505 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/80d7dc27-296f-4ba6-9348-dd6df0bed942-audit-policies\") pod \"oauth-openshift-5b65fcf68f-w44q2\" (UID: \"80d7dc27-296f-4ba6-9348-dd6df0bed942\") " pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.115641 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/80d7dc27-296f-4ba6-9348-dd6df0bed942-v4-0-config-system-router-certs\") pod \"oauth-openshift-5b65fcf68f-w44q2\" (UID: \"80d7dc27-296f-4ba6-9348-dd6df0bed942\") " pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.115719 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/80d7dc27-296f-4ba6-9348-dd6df0bed942-audit-dir\") pod \"oauth-openshift-5b65fcf68f-w44q2\" (UID: \"80d7dc27-296f-4ba6-9348-dd6df0bed942\") " pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.115825 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/80d7dc27-296f-4ba6-9348-dd6df0bed942-v4-0-config-user-template-error\") pod \"oauth-openshift-5b65fcf68f-w44q2\" (UID: \"80d7dc27-296f-4ba6-9348-dd6df0bed942\") " pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.115907 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" 
(UniqueName: \"kubernetes.io/secret/80d7dc27-296f-4ba6-9348-dd6df0bed942-v4-0-config-user-template-login\") pod \"oauth-openshift-5b65fcf68f-w44q2\" (UID: \"80d7dc27-296f-4ba6-9348-dd6df0bed942\") " pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.117481 5113 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/92e0cb66-e547-44e5-a384-bd522e554577-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.117527 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.117559 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.117601 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.117632 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.117661 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.117728 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.117790 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.117819 5113 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/92e0cb66-e547-44e5-a384-bd522e554577-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.117849 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vlth5\" (UniqueName: \"kubernetes.io/projected/92e0cb66-e547-44e5-a384-bd522e554577-kube-api-access-vlth5\") on node \"crc\" DevicePath \"\"" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.117888 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.117916 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.117946 5113 reconciler_common.go:299] "Volume detached for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.117974 5113 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/92e0cb66-e547-44e5-a384-bd522e554577-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.119939 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/80d7dc27-296f-4ba6-9348-dd6df0bed942-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5b65fcf68f-w44q2\" (UID: \"80d7dc27-296f-4ba6-9348-dd6df0bed942\") " pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.121249 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/80d7dc27-296f-4ba6-9348-dd6df0bed942-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5b65fcf68f-w44q2\" (UID: \"80d7dc27-296f-4ba6-9348-dd6df0bed942\") " pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.121813 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/80d7dc27-296f-4ba6-9348-dd6df0bed942-audit-dir\") pod \"oauth-openshift-5b65fcf68f-w44q2\" (UID: \"80d7dc27-296f-4ba6-9348-dd6df0bed942\") " pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.121962 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/80d7dc27-296f-4ba6-9348-dd6df0bed942-v4-0-config-system-service-ca\") pod \"oauth-openshift-5b65fcf68f-w44q2\" (UID: \"80d7dc27-296f-4ba6-9348-dd6df0bed942\") " pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.122631 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/80d7dc27-296f-4ba6-9348-dd6df0bed942-audit-policies\") pod \"oauth-openshift-5b65fcf68f-w44q2\" (UID: \"80d7dc27-296f-4ba6-9348-dd6df0bed942\") " pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.123171 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/80d7dc27-296f-4ba6-9348-dd6df0bed942-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5b65fcf68f-w44q2\" (UID: \"80d7dc27-296f-4ba6-9348-dd6df0bed942\") " pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.123327 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/80d7dc27-296f-4ba6-9348-dd6df0bed942-v4-0-config-system-router-certs\") pod \"oauth-openshift-5b65fcf68f-w44q2\" (UID: \"80d7dc27-296f-4ba6-9348-dd6df0bed942\") " pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.123610 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/80d7dc27-296f-4ba6-9348-dd6df0bed942-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5b65fcf68f-w44q2\" (UID: \"80d7dc27-296f-4ba6-9348-dd6df0bed942\") " pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 
09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.123946 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80d7dc27-296f-4ba6-9348-dd6df0bed942-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5b65fcf68f-w44q2\" (UID: \"80d7dc27-296f-4ba6-9348-dd6df0bed942\") " pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.124069 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/80d7dc27-296f-4ba6-9348-dd6df0bed942-v4-0-config-user-template-login\") pod \"oauth-openshift-5b65fcf68f-w44q2\" (UID: \"80d7dc27-296f-4ba6-9348-dd6df0bed942\") " pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.130612 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/80d7dc27-296f-4ba6-9348-dd6df0bed942-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5b65fcf68f-w44q2\" (UID: \"80d7dc27-296f-4ba6-9348-dd6df0bed942\") " pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.132343 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/80d7dc27-296f-4ba6-9348-dd6df0bed942-v4-0-config-user-template-error\") pod \"oauth-openshift-5b65fcf68f-w44q2\" (UID: \"80d7dc27-296f-4ba6-9348-dd6df0bed942\") " pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.135193 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/80d7dc27-296f-4ba6-9348-dd6df0bed942-v4-0-config-system-session\") pod \"oauth-openshift-5b65fcf68f-w44q2\" (UID: \"80d7dc27-296f-4ba6-9348-dd6df0bed942\") " pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.144991 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtnv8\" (UniqueName: \"kubernetes.io/projected/80d7dc27-296f-4ba6-9348-dd6df0bed942-kube-api-access-gtnv8\") pod \"oauth-openshift-5b65fcf68f-w44q2\" (UID: \"80d7dc27-296f-4ba6-9348-dd6df0bed942\") " pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.209513 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.508442 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-f74j6" event={"ID":"92e0cb66-e547-44e5-a384-bd522e554577","Type":"ContainerDied","Data":"e8077c5b3cb0cb6d2b968a95b990e63eae37abfa40f0bc64770f3b57b3c8a2ea"} Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.508618 5113 scope.go:117] "RemoveContainer" containerID="4c8fe81d86c85b10457f62f2c42c30e546dd24f124e2fd302fc9e010638ac1d6" Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.508465 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-f74j6"
Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.549447 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-f74j6"]
Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.556896 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-f74j6"]
Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.702783 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5b65fcf68f-w44q2"]
Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.711635 5113 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 09:23:06 crc kubenswrapper[5113]: I0121 09:23:06.851357 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92e0cb66-e547-44e5-a384-bd522e554577" path="/var/lib/kubelet/pods/92e0cb66-e547-44e5-a384-bd522e554577/volumes"
Jan 21 09:23:07 crc kubenswrapper[5113]: I0121 09:23:07.518169 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" event={"ID":"80d7dc27-296f-4ba6-9348-dd6df0bed942","Type":"ContainerStarted","Data":"c00dcd9714a4b835dfd46bcf08ea917874a01a05ac654356aeb3a6a73b98083b"}
Jan 21 09:23:07 crc kubenswrapper[5113]: I0121 09:23:07.518212 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" event={"ID":"80d7dc27-296f-4ba6-9348-dd6df0bed942","Type":"ContainerStarted","Data":"f4752c2e09769fed45fc1f33cedec20d12dd05c92f50a438b084d38fb922317d"}
Jan 21 09:23:07 crc kubenswrapper[5113]: I0121 09:23:07.518686 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2"
Jan 21 09:23:07 crc kubenswrapper[5113]: I0121 09:23:07.524269 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2"
Jan 21 09:23:07 crc kubenswrapper[5113]: I0121 09:23:07.548599 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5b65fcf68f-w44q2" podStartSLOduration=27.548579874 podStartE2EDuration="27.548579874s" podCreationTimestamp="2026-01-21 09:22:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:23:07.545213922 +0000 UTC m=+317.046040971" watchObservedRunningTime="2026-01-21 09:23:07.548579874 +0000 UTC m=+317.049406923"
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.373587 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vlng9"]
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.374723 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vlng9" podUID="f864d6cd-a6bf-4de0-ad26-ed72ff0c0544" containerName="registry-server" containerID="cri-o://428ba9dd346a511bf265107b1cfdbcb274364984449155e2e7682b2a21be7644" gracePeriod=30
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.387571 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-frj7n"]
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.388108 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-frj7n" podUID="2097e4fe-30fc-4341-90b7-14877224a474" containerName="registry-server" containerID="cri-o://a1b545b4ef756a2a4e15460361651faf9ce3d1c59328292f061b590256135999" gracePeriod=30
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.400519 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-6zxjl"]
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.400928 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-6zxjl" podUID="8f7e099d-81da-48a0-bf4a-c152167e8f40" containerName="marketplace-operator" containerID="cri-o://3659fd5be751b585b45b3eb342f36fe90005ba36c6f95636f4eb062a039fb08a" gracePeriod=30
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.409481 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b57t5"]
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.410058 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b57t5" podUID="f127893b-ca79-46cf-b50d-de1d623cdc3f" containerName="registry-server" containerID="cri-o://e076047b8ae1d88074199f9aec38b44fa0d755743baceed407122b4743f17e31" gracePeriod=30
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.413871 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pkwr7"]
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.414357 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pkwr7" podUID="4ad34d8c-f3f3-436e-8054-e0aa221aa622" containerName="registry-server" containerID="cri-o://4810fa84e93f3966973bb37e473e42788a16e3165715e1bac7a758696134cd74" gracePeriod=30
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.421110 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-w69cp"]
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.438219 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-w69cp"]
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.438367 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-w69cp"
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.528289 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/705df55e-0346-4051-a2db-cba821b3ef8c-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-w69cp\" (UID: \"705df55e-0346-4051-a2db-cba821b3ef8c\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-w69cp"
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.528633 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87w5n\" (UniqueName: \"kubernetes.io/projected/705df55e-0346-4051-a2db-cba821b3ef8c-kube-api-access-87w5n\") pod \"marketplace-operator-547dbd544d-w69cp\" (UID: \"705df55e-0346-4051-a2db-cba821b3ef8c\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-w69cp"
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.528665 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/705df55e-0346-4051-a2db-cba821b3ef8c-tmp\") pod \"marketplace-operator-547dbd544d-w69cp\" (UID: \"705df55e-0346-4051-a2db-cba821b3ef8c\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-w69cp"
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.528725 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/705df55e-0346-4051-a2db-cba821b3ef8c-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-w69cp\" (UID: \"705df55e-0346-4051-a2db-cba821b3ef8c\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-w69cp"
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.629709 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/705df55e-0346-4051-a2db-cba821b3ef8c-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-w69cp\" (UID: \"705df55e-0346-4051-a2db-cba821b3ef8c\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-w69cp"
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.629831 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/705df55e-0346-4051-a2db-cba821b3ef8c-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-w69cp\" (UID: \"705df55e-0346-4051-a2db-cba821b3ef8c\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-w69cp"
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.629870 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-87w5n\" (UniqueName: \"kubernetes.io/projected/705df55e-0346-4051-a2db-cba821b3ef8c-kube-api-access-87w5n\") pod \"marketplace-operator-547dbd544d-w69cp\" (UID: \"705df55e-0346-4051-a2db-cba821b3ef8c\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-w69cp"
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.629907 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/705df55e-0346-4051-a2db-cba821b3ef8c-tmp\") pod \"marketplace-operator-547dbd544d-w69cp\" (UID: \"705df55e-0346-4051-a2db-cba821b3ef8c\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-w69cp"
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.630669 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/705df55e-0346-4051-a2db-cba821b3ef8c-tmp\") pod \"marketplace-operator-547dbd544d-w69cp\" (UID: \"705df55e-0346-4051-a2db-cba821b3ef8c\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-w69cp"
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.631471 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/705df55e-0346-4051-a2db-cba821b3ef8c-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-w69cp\" (UID: \"705df55e-0346-4051-a2db-cba821b3ef8c\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-w69cp"
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.638178 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/705df55e-0346-4051-a2db-cba821b3ef8c-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-w69cp\" (UID: \"705df55e-0346-4051-a2db-cba821b3ef8c\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-w69cp"
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.666291 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-87w5n\" (UniqueName: \"kubernetes.io/projected/705df55e-0346-4051-a2db-cba821b3ef8c-kube-api-access-87w5n\") pod \"marketplace-operator-547dbd544d-w69cp\" (UID: \"705df55e-0346-4051-a2db-cba821b3ef8c\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-w69cp"
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.757773 5113 generic.go:358] "Generic (PLEG): container finished" podID="f864d6cd-a6bf-4de0-ad26-ed72ff0c0544" containerID="428ba9dd346a511bf265107b1cfdbcb274364984449155e2e7682b2a21be7644" exitCode=0
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.757847 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlng9" event={"ID":"f864d6cd-a6bf-4de0-ad26-ed72ff0c0544","Type":"ContainerDied","Data":"428ba9dd346a511bf265107b1cfdbcb274364984449155e2e7682b2a21be7644"}
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.757887 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vlng9" event={"ID":"f864d6cd-a6bf-4de0-ad26-ed72ff0c0544","Type":"ContainerDied","Data":"a8aa0cb155f1829bbeb05b6eebfa97a3bf420e5c337be2776d00d72c057b6385"}
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.757899 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8aa0cb155f1829bbeb05b6eebfa97a3bf420e5c337be2776d00d72c057b6385"
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.759411 5113 generic.go:358] "Generic (PLEG): container finished" podID="4ad34d8c-f3f3-436e-8054-e0aa221aa622" containerID="4810fa84e93f3966973bb37e473e42788a16e3165715e1bac7a758696134cd74" exitCode=0
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.759513 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkwr7" event={"ID":"4ad34d8c-f3f3-436e-8054-e0aa221aa622","Type":"ContainerDied","Data":"4810fa84e93f3966973bb37e473e42788a16e3165715e1bac7a758696134cd74"}
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.761255 5113 generic.go:358] "Generic (PLEG): container finished" podID="2097e4fe-30fc-4341-90b7-14877224a474" containerID="a1b545b4ef756a2a4e15460361651faf9ce3d1c59328292f061b590256135999" exitCode=0
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.761297 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-frj7n" event={"ID":"2097e4fe-30fc-4341-90b7-14877224a474","Type":"ContainerDied","Data":"a1b545b4ef756a2a4e15460361651faf9ce3d1c59328292f061b590256135999"}
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.761315 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-frj7n" event={"ID":"2097e4fe-30fc-4341-90b7-14877224a474","Type":"ContainerDied","Data":"31c56fcaef324debce60b00ef733ee7683c2eec53b3d125a9a2069e126cfbc9d"}
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.761325 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31c56fcaef324debce60b00ef733ee7683c2eec53b3d125a9a2069e126cfbc9d"
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.762655 5113 generic.go:358] "Generic (PLEG): container finished" podID="f127893b-ca79-46cf-b50d-de1d623cdc3f" containerID="e076047b8ae1d88074199f9aec38b44fa0d755743baceed407122b4743f17e31" exitCode=0
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.762792 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b57t5" event={"ID":"f127893b-ca79-46cf-b50d-de1d623cdc3f","Type":"ContainerDied","Data":"e076047b8ae1d88074199f9aec38b44fa0d755743baceed407122b4743f17e31"}
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.763963 5113 generic.go:358] "Generic (PLEG): container finished" podID="8f7e099d-81da-48a0-bf4a-c152167e8f40" containerID="3659fd5be751b585b45b3eb342f36fe90005ba36c6f95636f4eb062a039fb08a" exitCode=0
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.763994 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-6zxjl" event={"ID":"8f7e099d-81da-48a0-bf4a-c152167e8f40","Type":"ContainerDied","Data":"3659fd5be751b585b45b3eb342f36fe90005ba36c6f95636f4eb062a039fb08a"}
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.764014 5113 scope.go:117] "RemoveContainer" containerID="b6fc78891198b1aae3c811ea72d652b8b660b4df94fde6c26f6ca98f75021677"
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.843281 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-w69cp"
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.848410 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vlng9"
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.850618 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-frj7n"
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.857828 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pkwr7"
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.859528 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b57t5"
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.863222 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-6zxjl"
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.942456 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfxpb\" (UniqueName: \"kubernetes.io/projected/2097e4fe-30fc-4341-90b7-14877224a474-kube-api-access-gfxpb\") pod \"2097e4fe-30fc-4341-90b7-14877224a474\" (UID: \"2097e4fe-30fc-4341-90b7-14877224a474\") "
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.943347 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2097e4fe-30fc-4341-90b7-14877224a474-utilities\") pod \"2097e4fe-30fc-4341-90b7-14877224a474\" (UID: \"2097e4fe-30fc-4341-90b7-14877224a474\") "
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.944104 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f7e099d-81da-48a0-bf4a-c152167e8f40-marketplace-trusted-ca\") pod \"8f7e099d-81da-48a0-bf4a-c152167e8f40\" (UID: \"8f7e099d-81da-48a0-bf4a-c152167e8f40\") "
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.944153 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f864d6cd-a6bf-4de0-ad26-ed72ff0c0544-catalog-content\") pod \"f864d6cd-a6bf-4de0-ad26-ed72ff0c0544\" (UID: \"f864d6cd-a6bf-4de0-ad26-ed72ff0c0544\") "
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.944702 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8f7e099d-81da-48a0-bf4a-c152167e8f40-marketplace-operator-metrics\") pod \"8f7e099d-81da-48a0-bf4a-c152167e8f40\" (UID: \"8f7e099d-81da-48a0-bf4a-c152167e8f40\") "
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.944778 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f864d6cd-a6bf-4de0-ad26-ed72ff0c0544-utilities\") pod \"f864d6cd-a6bf-4de0-ad26-ed72ff0c0544\" (UID: \"f864d6cd-a6bf-4de0-ad26-ed72ff0c0544\") "
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.946438 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ad34d8c-f3f3-436e-8054-e0aa221aa622-catalog-content\") pod \"4ad34d8c-f3f3-436e-8054-e0aa221aa622\" (UID: \"4ad34d8c-f3f3-436e-8054-e0aa221aa622\") "
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.946892 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-db25r\" (UniqueName: \"kubernetes.io/projected/f127893b-ca79-46cf-b50d-de1d623cdc3f-kube-api-access-db25r\") pod \"f127893b-ca79-46cf-b50d-de1d623cdc3f\" (UID: \"f127893b-ca79-46cf-b50d-de1d623cdc3f\") "
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.947299 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f7e099d-81da-48a0-bf4a-c152167e8f40-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "8f7e099d-81da-48a0-bf4a-c152167e8f40" (UID: "8f7e099d-81da-48a0-bf4a-c152167e8f40"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.947447 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f864d6cd-a6bf-4de0-ad26-ed72ff0c0544-utilities" (OuterVolumeSpecName: "utilities") pod "f864d6cd-a6bf-4de0-ad26-ed72ff0c0544" (UID: "f864d6cd-a6bf-4de0-ad26-ed72ff0c0544"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.948935 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8f7e099d-81da-48a0-bf4a-c152167e8f40-tmp\") pod \"8f7e099d-81da-48a0-bf4a-c152167e8f40\" (UID: \"8f7e099d-81da-48a0-bf4a-c152167e8f40\") "
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.949235 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f127893b-ca79-46cf-b50d-de1d623cdc3f-utilities\") pod \"f127893b-ca79-46cf-b50d-de1d623cdc3f\" (UID: \"f127893b-ca79-46cf-b50d-de1d623cdc3f\") "
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.949330 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkbg6\" (UniqueName: \"kubernetes.io/projected/8f7e099d-81da-48a0-bf4a-c152167e8f40-kube-api-access-lkbg6\") pod \"8f7e099d-81da-48a0-bf4a-c152167e8f40\" (UID: \"8f7e099d-81da-48a0-bf4a-c152167e8f40\") "
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.949440 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djtzm\" (UniqueName: \"kubernetes.io/projected/4ad34d8c-f3f3-436e-8054-e0aa221aa622-kube-api-access-djtzm\") pod \"4ad34d8c-f3f3-436e-8054-e0aa221aa622\" (UID: \"4ad34d8c-f3f3-436e-8054-e0aa221aa622\") "
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.949537 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85d9w\" (UniqueName: \"kubernetes.io/projected/f864d6cd-a6bf-4de0-ad26-ed72ff0c0544-kube-api-access-85d9w\") pod \"f864d6cd-a6bf-4de0-ad26-ed72ff0c0544\" (UID: \"f864d6cd-a6bf-4de0-ad26-ed72ff0c0544\") "
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.950126 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f127893b-ca79-46cf-b50d-de1d623cdc3f-catalog-content\") pod \"f127893b-ca79-46cf-b50d-de1d623cdc3f\" (UID: \"f127893b-ca79-46cf-b50d-de1d623cdc3f\") "
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.950294 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ad34d8c-f3f3-436e-8054-e0aa221aa622-utilities\") pod \"4ad34d8c-f3f3-436e-8054-e0aa221aa622\" (UID: \"4ad34d8c-f3f3-436e-8054-e0aa221aa622\") "
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.950786 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2097e4fe-30fc-4341-90b7-14877224a474-catalog-content\") pod \"2097e4fe-30fc-4341-90b7-14877224a474\" (UID: \"2097e4fe-30fc-4341-90b7-14877224a474\") "
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.950571 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f7e099d-81da-48a0-bf4a-c152167e8f40-tmp" (OuterVolumeSpecName: "tmp") pod "8f7e099d-81da-48a0-bf4a-c152167e8f40" (UID: "8f7e099d-81da-48a0-bf4a-c152167e8f40"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.950583 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f127893b-ca79-46cf-b50d-de1d623cdc3f-utilities" (OuterVolumeSpecName: "utilities") pod "f127893b-ca79-46cf-b50d-de1d623cdc3f" (UID: "f127893b-ca79-46cf-b50d-de1d623cdc3f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.951272 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f7e099d-81da-48a0-bf4a-c152167e8f40-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "8f7e099d-81da-48a0-bf4a-c152167e8f40" (UID: "8f7e099d-81da-48a0-bf4a-c152167e8f40"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.951665 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f127893b-ca79-46cf-b50d-de1d623cdc3f-kube-api-access-db25r" (OuterVolumeSpecName: "kube-api-access-db25r") pod "f127893b-ca79-46cf-b50d-de1d623cdc3f" (UID: "f127893b-ca79-46cf-b50d-de1d623cdc3f"). InnerVolumeSpecName "kube-api-access-db25r". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.952743 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2097e4fe-30fc-4341-90b7-14877224a474-kube-api-access-gfxpb" (OuterVolumeSpecName: "kube-api-access-gfxpb") pod "2097e4fe-30fc-4341-90b7-14877224a474" (UID: "2097e4fe-30fc-4341-90b7-14877224a474"). InnerVolumeSpecName "kube-api-access-gfxpb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.952933 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2097e4fe-30fc-4341-90b7-14877224a474-utilities" (OuterVolumeSpecName: "utilities") pod "2097e4fe-30fc-4341-90b7-14877224a474" (UID: "2097e4fe-30fc-4341-90b7-14877224a474"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.953318 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ad34d8c-f3f3-436e-8054-e0aa221aa622-utilities" (OuterVolumeSpecName: "utilities") pod "4ad34d8c-f3f3-436e-8054-e0aa221aa622" (UID: "4ad34d8c-f3f3-436e-8054-e0aa221aa622"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.955827 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ad34d8c-f3f3-436e-8054-e0aa221aa622-kube-api-access-djtzm" (OuterVolumeSpecName: "kube-api-access-djtzm") pod "4ad34d8c-f3f3-436e-8054-e0aa221aa622" (UID: "4ad34d8c-f3f3-436e-8054-e0aa221aa622"). InnerVolumeSpecName "kube-api-access-djtzm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.956465 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f864d6cd-a6bf-4de0-ad26-ed72ff0c0544-kube-api-access-85d9w" (OuterVolumeSpecName: "kube-api-access-85d9w") pod "f864d6cd-a6bf-4de0-ad26-ed72ff0c0544" (UID: "f864d6cd-a6bf-4de0-ad26-ed72ff0c0544"). InnerVolumeSpecName "kube-api-access-85d9w". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.958272 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f7e099d-81da-48a0-bf4a-c152167e8f40-kube-api-access-lkbg6" (OuterVolumeSpecName: "kube-api-access-lkbg6") pod "8f7e099d-81da-48a0-bf4a-c152167e8f40" (UID: "8f7e099d-81da-48a0-bf4a-c152167e8f40"). InnerVolumeSpecName "kube-api-access-lkbg6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.962391 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f127893b-ca79-46cf-b50d-de1d623cdc3f-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.962420 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lkbg6\" (UniqueName: \"kubernetes.io/projected/8f7e099d-81da-48a0-bf4a-c152167e8f40-kube-api-access-lkbg6\") on node \"crc\" DevicePath \"\""
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.962431 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-djtzm\" (UniqueName: \"kubernetes.io/projected/4ad34d8c-f3f3-436e-8054-e0aa221aa622-kube-api-access-djtzm\") on node \"crc\" DevicePath \"\""
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.962439 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-85d9w\" (UniqueName: \"kubernetes.io/projected/f864d6cd-a6bf-4de0-ad26-ed72ff0c0544-kube-api-access-85d9w\") on node \"crc\" DevicePath \"\""
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.962448 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ad34d8c-f3f3-436e-8054-e0aa221aa622-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.962457 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gfxpb\" (UniqueName: \"kubernetes.io/projected/2097e4fe-30fc-4341-90b7-14877224a474-kube-api-access-gfxpb\") on node \"crc\" DevicePath \"\""
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.962465 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2097e4fe-30fc-4341-90b7-14877224a474-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.962473 5113 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f7e099d-81da-48a0-bf4a-c152167e8f40-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.962482 5113 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8f7e099d-81da-48a0-bf4a-c152167e8f40-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.962491 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f864d6cd-a6bf-4de0-ad26-ed72ff0c0544-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.962499 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-db25r\" (UniqueName: \"kubernetes.io/projected/f127893b-ca79-46cf-b50d-de1d623cdc3f-kube-api-access-db25r\") on node \"crc\" DevicePath \"\""
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.962507 5113 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8f7e099d-81da-48a0-bf4a-c152167e8f40-tmp\") on node \"crc\" DevicePath \"\""
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.983540 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f864d6cd-a6bf-4de0-ad26-ed72ff0c0544-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f864d6cd-a6bf-4de0-ad26-ed72ff0c0544" (UID: "f864d6cd-a6bf-4de0-ad26-ed72ff0c0544"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:23:43 crc kubenswrapper[5113]: I0121 09:23:43.996805 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f127893b-ca79-46cf-b50d-de1d623cdc3f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f127893b-ca79-46cf-b50d-de1d623cdc3f" (UID: "f127893b-ca79-46cf-b50d-de1d623cdc3f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:23:44 crc kubenswrapper[5113]: I0121 09:23:44.037243 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2097e4fe-30fc-4341-90b7-14877224a474-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2097e4fe-30fc-4341-90b7-14877224a474" (UID: "2097e4fe-30fc-4341-90b7-14877224a474"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:23:44 crc kubenswrapper[5113]: I0121 09:23:44.055877 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ad34d8c-f3f3-436e-8054-e0aa221aa622-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ad34d8c-f3f3-436e-8054-e0aa221aa622" (UID: "4ad34d8c-f3f3-436e-8054-e0aa221aa622"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:23:44 crc kubenswrapper[5113]: I0121 09:23:44.063326 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ad34d8c-f3f3-436e-8054-e0aa221aa622-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 09:23:44 crc kubenswrapper[5113]: I0121 09:23:44.063360 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f127893b-ca79-46cf-b50d-de1d623cdc3f-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 09:23:44 crc kubenswrapper[5113]: I0121 09:23:44.063369 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2097e4fe-30fc-4341-90b7-14877224a474-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 09:23:44 crc kubenswrapper[5113]: I0121 09:23:44.063378 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f864d6cd-a6bf-4de0-ad26-ed72ff0c0544-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 09:23:44 crc kubenswrapper[5113]: I0121 09:23:44.273413 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-w69cp"]
Jan 21 09:23:44 crc kubenswrapper[5113]: I0121 09:23:44.769440 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-6zxjl" event={"ID":"8f7e099d-81da-48a0-bf4a-c152167e8f40","Type":"ContainerDied","Data":"afba4f48699b3420fa490240b69f82a71647a96392477e0d64fe790e41db8220"}
Jan 21 09:23:44 crc kubenswrapper[5113]: I0121 09:23:44.769484 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-6zxjl"
Jan 21 09:23:44 crc kubenswrapper[5113]: I0121 09:23:44.769795 5113 scope.go:117] "RemoveContainer" containerID="3659fd5be751b585b45b3eb342f36fe90005ba36c6f95636f4eb062a039fb08a"
Jan 21 09:23:44 crc kubenswrapper[5113]: I0121 09:23:44.772202 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkwr7" event={"ID":"4ad34d8c-f3f3-436e-8054-e0aa221aa622","Type":"ContainerDied","Data":"b657a1eedd82f3cc867ffa6927daf62a55dc71f052793cb9d43313b06e70eedf"}
Jan 21 09:23:44 crc kubenswrapper[5113]: I0121 09:23:44.772286 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pkwr7"
Jan 21 09:23:44 crc kubenswrapper[5113]: I0121 09:23:44.779452 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-w69cp" event={"ID":"705df55e-0346-4051-a2db-cba821b3ef8c","Type":"ContainerStarted","Data":"74157f32c090f30c69db5d2455e42c1c681d24df78a59472561565593ebe87b0"}
Jan 21 09:23:44 crc kubenswrapper[5113]: I0121 09:23:44.779514 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-w69cp" event={"ID":"705df55e-0346-4051-a2db-cba821b3ef8c","Type":"ContainerStarted","Data":"ffa3cbfc1ddefe3e747c4b4b980175fb11555214b673f40f33cd5fa968a4bfc6"}
Jan 21 09:23:44 crc kubenswrapper[5113]: I0121 09:23:44.779531 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-w69cp"
Jan 21 09:23:44 crc kubenswrapper[5113]: I0121 09:23:44.783935 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-w69cp"
Jan 21 09:23:44 crc kubenswrapper[5113]: I0121 09:23:44.784338 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-frj7n"
Jan 21 09:23:44 crc kubenswrapper[5113]: I0121 09:23:44.784385 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b57t5" event={"ID":"f127893b-ca79-46cf-b50d-de1d623cdc3f","Type":"ContainerDied","Data":"448d7bd75d83ba4f9f9c43312fd5e402605001378de59605d91dca149a0ffcc6"}
Jan 21 09:23:44 crc kubenswrapper[5113]: I0121 09:23:44.784349 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b57t5"
Jan 21 09:23:44 crc kubenswrapper[5113]: I0121 09:23:44.785762 5113 scope.go:117] "RemoveContainer" containerID="4810fa84e93f3966973bb37e473e42788a16e3165715e1bac7a758696134cd74"
Jan 21 09:23:44 crc kubenswrapper[5113]: I0121 09:23:44.785774 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vlng9"
Jan 21 09:23:44 crc kubenswrapper[5113]: I0121 09:23:44.802029 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-w69cp" podStartSLOduration=1.8020115319999999 podStartE2EDuration="1.802011532s" podCreationTimestamp="2026-01-21 09:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:23:44.800158509 +0000 UTC m=+354.300985568" watchObservedRunningTime="2026-01-21 09:23:44.802011532 +0000 UTC m=+354.302838581"
Jan 21 09:23:44 crc kubenswrapper[5113]: I0121 09:23:44.813670 5113 scope.go:117] "RemoveContainer" containerID="90e423a59f999aa4fd26273be1f19de3ea89e46cf64b57c7fc0ebce14bce913e"
Jan 21 09:23:44 crc kubenswrapper[5113]: I0121 09:23:44.852831 5113 scope.go:117] "RemoveContainer" containerID="1d8c4e50c1861c0c3d9d7070cbdddcf67a352d3272390056df2d267ee1f856d0"
Jan 21 09:23:44 crc kubenswrapper[5113]: I0121 09:23:44.856699 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pkwr7"]
Jan 21 09:23:44 crc kubenswrapper[5113]: I0121 09:23:44.860164 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pkwr7"]
Jan 21 09:23:44 crc kubenswrapper[5113]: I0121 09:23:44.869486 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-frj7n"]
Jan 21 09:23:44 crc kubenswrapper[5113]: I0121 09:23:44.876542 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-frj7n"]
Jan 21 09:23:44 crc kubenswrapper[5113]: I0121 09:23:44.887266 5113 scope.go:117] "RemoveContainer" containerID="e076047b8ae1d88074199f9aec38b44fa0d755743baceed407122b4743f17e31"
Jan 21 09:23:44 crc kubenswrapper[5113]: I0121 09:23:44.888290 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-6zxjl"]
Jan 21 09:23:44 crc kubenswrapper[5113]: I0121 09:23:44.902170 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-6zxjl"]
Jan 21 09:23:44 crc kubenswrapper[5113]: I0121 09:23:44.902260 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b57t5"]
Jan 21 09:23:44 crc kubenswrapper[5113]: I0121 09:23:44.903010 5113 scope.go:117] "RemoveContainer" containerID="180a45a5e6ed638264837aa8986d9cfff2c660f20ecd30446dd49b0aeef36b41"
Jan 21 09:23:44 crc kubenswrapper[5113]: I0121 09:23:44.907328 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b57t5"]
Jan 21 09:23:44 crc kubenswrapper[5113]: I0121 09:23:44.920069 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vlng9"]
Jan 21 09:23:44 crc kubenswrapper[5113]: I0121 09:23:44.922928 5113 scope.go:117] "RemoveContainer"
containerID="4e216f63e60269daacf454363e9711f37bf62aaff82b102b08e726c060407aae" Jan 21 09:23:44 crc kubenswrapper[5113]: I0121 09:23:44.923582 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vlng9"] Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.580656 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vgn4p"] Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.581286 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f864d6cd-a6bf-4de0-ad26-ed72ff0c0544" containerName="extract-utilities" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.581311 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="f864d6cd-a6bf-4de0-ad26-ed72ff0c0544" containerName="extract-utilities" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.581324 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8f7e099d-81da-48a0-bf4a-c152167e8f40" containerName="marketplace-operator" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.581331 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f7e099d-81da-48a0-bf4a-c152167e8f40" containerName="marketplace-operator" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.581342 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f127893b-ca79-46cf-b50d-de1d623cdc3f" containerName="extract-content" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.581349 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="f127893b-ca79-46cf-b50d-de1d623cdc3f" containerName="extract-content" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.581366 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f864d6cd-a6bf-4de0-ad26-ed72ff0c0544" containerName="extract-content" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.581375 5113 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="f864d6cd-a6bf-4de0-ad26-ed72ff0c0544" containerName="extract-content" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.581390 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4ad34d8c-f3f3-436e-8054-e0aa221aa622" containerName="registry-server" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.581396 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ad34d8c-f3f3-436e-8054-e0aa221aa622" containerName="registry-server" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.581404 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4ad34d8c-f3f3-436e-8054-e0aa221aa622" containerName="extract-utilities" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.581410 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ad34d8c-f3f3-436e-8054-e0aa221aa622" containerName="extract-utilities" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.581419 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2097e4fe-30fc-4341-90b7-14877224a474" containerName="extract-content" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.581425 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="2097e4fe-30fc-4341-90b7-14877224a474" containerName="extract-content" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.581436 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f864d6cd-a6bf-4de0-ad26-ed72ff0c0544" containerName="registry-server" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.581443 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="f864d6cd-a6bf-4de0-ad26-ed72ff0c0544" containerName="registry-server" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.581452 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f127893b-ca79-46cf-b50d-de1d623cdc3f" containerName="registry-server" Jan 21 09:23:45 crc kubenswrapper[5113]: 
I0121 09:23:45.581458 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="f127893b-ca79-46cf-b50d-de1d623cdc3f" containerName="registry-server" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.581476 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2097e4fe-30fc-4341-90b7-14877224a474" containerName="extract-utilities" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.581482 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="2097e4fe-30fc-4341-90b7-14877224a474" containerName="extract-utilities" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.581490 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f127893b-ca79-46cf-b50d-de1d623cdc3f" containerName="extract-utilities" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.581496 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="f127893b-ca79-46cf-b50d-de1d623cdc3f" containerName="extract-utilities" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.581504 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2097e4fe-30fc-4341-90b7-14877224a474" containerName="registry-server" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.581510 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="2097e4fe-30fc-4341-90b7-14877224a474" containerName="registry-server" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.581523 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4ad34d8c-f3f3-436e-8054-e0aa221aa622" containerName="extract-content" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.581531 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ad34d8c-f3f3-436e-8054-e0aa221aa622" containerName="extract-content" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.581622 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="f127893b-ca79-46cf-b50d-de1d623cdc3f" 
containerName="registry-server" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.581634 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="4ad34d8c-f3f3-436e-8054-e0aa221aa622" containerName="registry-server" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.581647 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="f864d6cd-a6bf-4de0-ad26-ed72ff0c0544" containerName="registry-server" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.581657 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="8f7e099d-81da-48a0-bf4a-c152167e8f40" containerName="marketplace-operator" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.581667 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="2097e4fe-30fc-4341-90b7-14877224a474" containerName="registry-server" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.581785 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8f7e099d-81da-48a0-bf4a-c152167e8f40" containerName="marketplace-operator" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.581795 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f7e099d-81da-48a0-bf4a-c152167e8f40" containerName="marketplace-operator" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.581897 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="8f7e099d-81da-48a0-bf4a-c152167e8f40" containerName="marketplace-operator" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.591049 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vgn4p" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.593249 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.594107 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vgn4p"] Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.787953 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nsgd7"] Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.788411 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxzqk\" (UniqueName: \"kubernetes.io/projected/b1401cee-74bd-45dd-b2c8-e9ff222854dc-kube-api-access-sxzqk\") pod \"redhat-marketplace-vgn4p\" (UID: \"b1401cee-74bd-45dd-b2c8-e9ff222854dc\") " pod="openshift-marketplace/redhat-marketplace-vgn4p" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.789475 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1401cee-74bd-45dd-b2c8-e9ff222854dc-catalog-content\") pod \"redhat-marketplace-vgn4p\" (UID: \"b1401cee-74bd-45dd-b2c8-e9ff222854dc\") " pod="openshift-marketplace/redhat-marketplace-vgn4p" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.789566 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1401cee-74bd-45dd-b2c8-e9ff222854dc-utilities\") pod \"redhat-marketplace-vgn4p\" (UID: \"b1401cee-74bd-45dd-b2c8-e9ff222854dc\") " pod="openshift-marketplace/redhat-marketplace-vgn4p" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.808112 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/certified-operators-nsgd7"] Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.808568 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nsgd7" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.815651 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.891238 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sxzqk\" (UniqueName: \"kubernetes.io/projected/b1401cee-74bd-45dd-b2c8-e9ff222854dc-kube-api-access-sxzqk\") pod \"redhat-marketplace-vgn4p\" (UID: \"b1401cee-74bd-45dd-b2c8-e9ff222854dc\") " pod="openshift-marketplace/redhat-marketplace-vgn4p" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.891334 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1401cee-74bd-45dd-b2c8-e9ff222854dc-catalog-content\") pod \"redhat-marketplace-vgn4p\" (UID: \"b1401cee-74bd-45dd-b2c8-e9ff222854dc\") " pod="openshift-marketplace/redhat-marketplace-vgn4p" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.891444 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1401cee-74bd-45dd-b2c8-e9ff222854dc-utilities\") pod \"redhat-marketplace-vgn4p\" (UID: \"b1401cee-74bd-45dd-b2c8-e9ff222854dc\") " pod="openshift-marketplace/redhat-marketplace-vgn4p" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.891767 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1401cee-74bd-45dd-b2c8-e9ff222854dc-catalog-content\") pod \"redhat-marketplace-vgn4p\" (UID: \"b1401cee-74bd-45dd-b2c8-e9ff222854dc\") " 
pod="openshift-marketplace/redhat-marketplace-vgn4p" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.892148 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1401cee-74bd-45dd-b2c8-e9ff222854dc-utilities\") pod \"redhat-marketplace-vgn4p\" (UID: \"b1401cee-74bd-45dd-b2c8-e9ff222854dc\") " pod="openshift-marketplace/redhat-marketplace-vgn4p" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.926359 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxzqk\" (UniqueName: \"kubernetes.io/projected/b1401cee-74bd-45dd-b2c8-e9ff222854dc-kube-api-access-sxzqk\") pod \"redhat-marketplace-vgn4p\" (UID: \"b1401cee-74bd-45dd-b2c8-e9ff222854dc\") " pod="openshift-marketplace/redhat-marketplace-vgn4p" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.992781 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9drvl\" (UniqueName: \"kubernetes.io/projected/8e56cc9d-707d-40d1-9ea5-29233a2270ea-kube-api-access-9drvl\") pod \"certified-operators-nsgd7\" (UID: \"8e56cc9d-707d-40d1-9ea5-29233a2270ea\") " pod="openshift-marketplace/certified-operators-nsgd7" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.992858 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e56cc9d-707d-40d1-9ea5-29233a2270ea-catalog-content\") pod \"certified-operators-nsgd7\" (UID: \"8e56cc9d-707d-40d1-9ea5-29233a2270ea\") " pod="openshift-marketplace/certified-operators-nsgd7" Jan 21 09:23:45 crc kubenswrapper[5113]: I0121 09:23:45.992979 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e56cc9d-707d-40d1-9ea5-29233a2270ea-utilities\") pod \"certified-operators-nsgd7\" (UID: 
\"8e56cc9d-707d-40d1-9ea5-29233a2270ea\") " pod="openshift-marketplace/certified-operators-nsgd7" Jan 21 09:23:46 crc kubenswrapper[5113]: I0121 09:23:46.094319 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9drvl\" (UniqueName: \"kubernetes.io/projected/8e56cc9d-707d-40d1-9ea5-29233a2270ea-kube-api-access-9drvl\") pod \"certified-operators-nsgd7\" (UID: \"8e56cc9d-707d-40d1-9ea5-29233a2270ea\") " pod="openshift-marketplace/certified-operators-nsgd7" Jan 21 09:23:46 crc kubenswrapper[5113]: I0121 09:23:46.094390 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e56cc9d-707d-40d1-9ea5-29233a2270ea-catalog-content\") pod \"certified-operators-nsgd7\" (UID: \"8e56cc9d-707d-40d1-9ea5-29233a2270ea\") " pod="openshift-marketplace/certified-operators-nsgd7" Jan 21 09:23:46 crc kubenswrapper[5113]: I0121 09:23:46.094448 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e56cc9d-707d-40d1-9ea5-29233a2270ea-utilities\") pod \"certified-operators-nsgd7\" (UID: \"8e56cc9d-707d-40d1-9ea5-29233a2270ea\") " pod="openshift-marketplace/certified-operators-nsgd7" Jan 21 09:23:46 crc kubenswrapper[5113]: I0121 09:23:46.095202 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e56cc9d-707d-40d1-9ea5-29233a2270ea-utilities\") pod \"certified-operators-nsgd7\" (UID: \"8e56cc9d-707d-40d1-9ea5-29233a2270ea\") " pod="openshift-marketplace/certified-operators-nsgd7" Jan 21 09:23:46 crc kubenswrapper[5113]: I0121 09:23:46.096013 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e56cc9d-707d-40d1-9ea5-29233a2270ea-catalog-content\") pod \"certified-operators-nsgd7\" (UID: \"8e56cc9d-707d-40d1-9ea5-29233a2270ea\") 
" pod="openshift-marketplace/certified-operators-nsgd7" Jan 21 09:23:46 crc kubenswrapper[5113]: I0121 09:23:46.113194 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9drvl\" (UniqueName: \"kubernetes.io/projected/8e56cc9d-707d-40d1-9ea5-29233a2270ea-kube-api-access-9drvl\") pod \"certified-operators-nsgd7\" (UID: \"8e56cc9d-707d-40d1-9ea5-29233a2270ea\") " pod="openshift-marketplace/certified-operators-nsgd7" Jan 21 09:23:46 crc kubenswrapper[5113]: I0121 09:23:46.126811 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nsgd7" Jan 21 09:23:46 crc kubenswrapper[5113]: I0121 09:23:46.217312 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vgn4p" Jan 21 09:23:46 crc kubenswrapper[5113]: I0121 09:23:46.329851 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nsgd7"] Jan 21 09:23:46 crc kubenswrapper[5113]: W0121 09:23:46.339553 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e56cc9d_707d_40d1_9ea5_29233a2270ea.slice/crio-4d4ce7cea73971fe273b2c4608b3316080b0899c24f6bb6b84ea4837279ee1a8 WatchSource:0}: Error finding container 4d4ce7cea73971fe273b2c4608b3316080b0899c24f6bb6b84ea4837279ee1a8: Status 404 returned error can't find the container with id 4d4ce7cea73971fe273b2c4608b3316080b0899c24f6bb6b84ea4837279ee1a8 Jan 21 09:23:46 crc kubenswrapper[5113]: I0121 09:23:46.432918 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vgn4p"] Jan 21 09:23:46 crc kubenswrapper[5113]: I0121 09:23:46.832316 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vgn4p" 
event={"ID":"b1401cee-74bd-45dd-b2c8-e9ff222854dc","Type":"ContainerStarted","Data":"bc897800138f9fcd2e692cde96b786a07e3fdf18bb6197717a34c5f9502c5740"} Jan 21 09:23:46 crc kubenswrapper[5113]: I0121 09:23:46.832595 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vgn4p" event={"ID":"b1401cee-74bd-45dd-b2c8-e9ff222854dc","Type":"ContainerStarted","Data":"46b40d05b26d3d86cac52545686f98e5db6498b1c0d5d7957e17e91ab1665341"} Jan 21 09:23:46 crc kubenswrapper[5113]: I0121 09:23:46.837658 5113 generic.go:358] "Generic (PLEG): container finished" podID="8e56cc9d-707d-40d1-9ea5-29233a2270ea" containerID="a3a4b19e9ac89a363eeb09d48b40beedc1ad046d0502e6117cc24070003958c8" exitCode=0 Jan 21 09:23:46 crc kubenswrapper[5113]: I0121 09:23:46.837871 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nsgd7" event={"ID":"8e56cc9d-707d-40d1-9ea5-29233a2270ea","Type":"ContainerDied","Data":"a3a4b19e9ac89a363eeb09d48b40beedc1ad046d0502e6117cc24070003958c8"} Jan 21 09:23:46 crc kubenswrapper[5113]: I0121 09:23:46.837927 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nsgd7" event={"ID":"8e56cc9d-707d-40d1-9ea5-29233a2270ea","Type":"ContainerStarted","Data":"4d4ce7cea73971fe273b2c4608b3316080b0899c24f6bb6b84ea4837279ee1a8"} Jan 21 09:23:46 crc kubenswrapper[5113]: I0121 09:23:46.870701 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2097e4fe-30fc-4341-90b7-14877224a474" path="/var/lib/kubelet/pods/2097e4fe-30fc-4341-90b7-14877224a474/volumes" Jan 21 09:23:46 crc kubenswrapper[5113]: I0121 09:23:46.871919 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ad34d8c-f3f3-436e-8054-e0aa221aa622" path="/var/lib/kubelet/pods/4ad34d8c-f3f3-436e-8054-e0aa221aa622/volumes" Jan 21 09:23:46 crc kubenswrapper[5113]: I0121 09:23:46.873251 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="8f7e099d-81da-48a0-bf4a-c152167e8f40" path="/var/lib/kubelet/pods/8f7e099d-81da-48a0-bf4a-c152167e8f40/volumes" Jan 21 09:23:46 crc kubenswrapper[5113]: I0121 09:23:46.874957 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f127893b-ca79-46cf-b50d-de1d623cdc3f" path="/var/lib/kubelet/pods/f127893b-ca79-46cf-b50d-de1d623cdc3f/volumes" Jan 21 09:23:46 crc kubenswrapper[5113]: I0121 09:23:46.876225 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f864d6cd-a6bf-4de0-ad26-ed72ff0c0544" path="/var/lib/kubelet/pods/f864d6cd-a6bf-4de0-ad26-ed72ff0c0544/volumes" Jan 21 09:23:47 crc kubenswrapper[5113]: I0121 09:23:47.845688 5113 generic.go:358] "Generic (PLEG): container finished" podID="b1401cee-74bd-45dd-b2c8-e9ff222854dc" containerID="bc897800138f9fcd2e692cde96b786a07e3fdf18bb6197717a34c5f9502c5740" exitCode=0 Jan 21 09:23:47 crc kubenswrapper[5113]: I0121 09:23:47.845725 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vgn4p" event={"ID":"b1401cee-74bd-45dd-b2c8-e9ff222854dc","Type":"ContainerDied","Data":"bc897800138f9fcd2e692cde96b786a07e3fdf18bb6197717a34c5f9502c5740"} Jan 21 09:23:47 crc kubenswrapper[5113]: I0121 09:23:47.846134 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vgn4p" event={"ID":"b1401cee-74bd-45dd-b2c8-e9ff222854dc","Type":"ContainerStarted","Data":"43768afbb54fa95392c3eed2b82ccaab3c0ee98a5a97981ab37d06055f9c1b18"} Jan 21 09:23:47 crc kubenswrapper[5113]: I0121 09:23:47.848608 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nsgd7" event={"ID":"8e56cc9d-707d-40d1-9ea5-29233a2270ea","Type":"ContainerStarted","Data":"5d70620c1bd558bee29109c5c3a37862839524a5b440a413366ed519d2f5d9ae"} Jan 21 09:23:47 crc kubenswrapper[5113]: I0121 09:23:47.981310 5113 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-wh5g8"] Jan 21 09:23:47 crc kubenswrapper[5113]: I0121 09:23:47.989438 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wh5g8" Jan 21 09:23:47 crc kubenswrapper[5113]: I0121 09:23:47.992310 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 21 09:23:47 crc kubenswrapper[5113]: I0121 09:23:47.995235 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wh5g8"] Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.016854 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98hgm\" (UniqueName: \"kubernetes.io/projected/46b7f551-fb42-475f-8c13-7810f0eed33e-kube-api-access-98hgm\") pod \"redhat-operators-wh5g8\" (UID: \"46b7f551-fb42-475f-8c13-7810f0eed33e\") " pod="openshift-marketplace/redhat-operators-wh5g8" Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.016902 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46b7f551-fb42-475f-8c13-7810f0eed33e-catalog-content\") pod \"redhat-operators-wh5g8\" (UID: \"46b7f551-fb42-475f-8c13-7810f0eed33e\") " pod="openshift-marketplace/redhat-operators-wh5g8" Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.016946 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46b7f551-fb42-475f-8c13-7810f0eed33e-utilities\") pod \"redhat-operators-wh5g8\" (UID: \"46b7f551-fb42-475f-8c13-7810f0eed33e\") " pod="openshift-marketplace/redhat-operators-wh5g8" Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.119572 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46b7f551-fb42-475f-8c13-7810f0eed33e-utilities\") pod \"redhat-operators-wh5g8\" (UID: \"46b7f551-fb42-475f-8c13-7810f0eed33e\") " pod="openshift-marketplace/redhat-operators-wh5g8" Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.119678 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-98hgm\" (UniqueName: \"kubernetes.io/projected/46b7f551-fb42-475f-8c13-7810f0eed33e-kube-api-access-98hgm\") pod \"redhat-operators-wh5g8\" (UID: \"46b7f551-fb42-475f-8c13-7810f0eed33e\") " pod="openshift-marketplace/redhat-operators-wh5g8" Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.119702 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46b7f551-fb42-475f-8c13-7810f0eed33e-catalog-content\") pod \"redhat-operators-wh5g8\" (UID: \"46b7f551-fb42-475f-8c13-7810f0eed33e\") " pod="openshift-marketplace/redhat-operators-wh5g8" Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.120353 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46b7f551-fb42-475f-8c13-7810f0eed33e-catalog-content\") pod \"redhat-operators-wh5g8\" (UID: \"46b7f551-fb42-475f-8c13-7810f0eed33e\") " pod="openshift-marketplace/redhat-operators-wh5g8" Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.120362 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46b7f551-fb42-475f-8c13-7810f0eed33e-utilities\") pod \"redhat-operators-wh5g8\" (UID: \"46b7f551-fb42-475f-8c13-7810f0eed33e\") " pod="openshift-marketplace/redhat-operators-wh5g8" Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.152065 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-98hgm\" (UniqueName: 
\"kubernetes.io/projected/46b7f551-fb42-475f-8c13-7810f0eed33e-kube-api-access-98hgm\") pod \"redhat-operators-wh5g8\" (UID: \"46b7f551-fb42-475f-8c13-7810f0eed33e\") " pod="openshift-marketplace/redhat-operators-wh5g8" Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.185218 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5jhqk"] Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.194413 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5jhqk" Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.195701 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5jhqk"] Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.196043 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.222559 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7585w\" (UniqueName: \"kubernetes.io/projected/0fc5a81a-2441-41ce-9c03-99533e7c0fc5-kube-api-access-7585w\") pod \"community-operators-5jhqk\" (UID: \"0fc5a81a-2441-41ce-9c03-99533e7c0fc5\") " pod="openshift-marketplace/community-operators-5jhqk" Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.222707 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fc5a81a-2441-41ce-9c03-99533e7c0fc5-utilities\") pod \"community-operators-5jhqk\" (UID: \"0fc5a81a-2441-41ce-9c03-99533e7c0fc5\") " pod="openshift-marketplace/community-operators-5jhqk" Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.222832 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0fc5a81a-2441-41ce-9c03-99533e7c0fc5-catalog-content\") pod \"community-operators-5jhqk\" (UID: \"0fc5a81a-2441-41ce-9c03-99533e7c0fc5\") " pod="openshift-marketplace/community-operators-5jhqk"
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.321568 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-skk4w"]
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.323570 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fc5a81a-2441-41ce-9c03-99533e7c0fc5-utilities\") pod \"community-operators-5jhqk\" (UID: \"0fc5a81a-2441-41ce-9c03-99533e7c0fc5\") " pod="openshift-marketplace/community-operators-5jhqk"
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.323618 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fc5a81a-2441-41ce-9c03-99533e7c0fc5-catalog-content\") pod \"community-operators-5jhqk\" (UID: \"0fc5a81a-2441-41ce-9c03-99533e7c0fc5\") " pod="openshift-marketplace/community-operators-5jhqk"
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.323642 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7585w\" (UniqueName: \"kubernetes.io/projected/0fc5a81a-2441-41ce-9c03-99533e7c0fc5-kube-api-access-7585w\") pod \"community-operators-5jhqk\" (UID: \"0fc5a81a-2441-41ce-9c03-99533e7c0fc5\") " pod="openshift-marketplace/community-operators-5jhqk"
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.324106 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fc5a81a-2441-41ce-9c03-99533e7c0fc5-catalog-content\") pod \"community-operators-5jhqk\" (UID: \"0fc5a81a-2441-41ce-9c03-99533e7c0fc5\") " pod="openshift-marketplace/community-operators-5jhqk"
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.324192 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fc5a81a-2441-41ce-9c03-99533e7c0fc5-utilities\") pod \"community-operators-5jhqk\" (UID: \"0fc5a81a-2441-41ce-9c03-99533e7c0fc5\") " pod="openshift-marketplace/community-operators-5jhqk"
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.325637 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-skk4w"
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.346542 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-skk4w"]
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.349204 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7585w\" (UniqueName: \"kubernetes.io/projected/0fc5a81a-2441-41ce-9c03-99533e7c0fc5-kube-api-access-7585w\") pod \"community-operators-5jhqk\" (UID: \"0fc5a81a-2441-41ce-9c03-99533e7c0fc5\") " pod="openshift-marketplace/community-operators-5jhqk"
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.349717 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wh5g8"
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.525805 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9z28\" (UniqueName: \"kubernetes.io/projected/aad172de-b4a9-410a-915f-26aacd86b60a-kube-api-access-k9z28\") pod \"image-registry-5d9d95bf5b-skk4w\" (UID: \"aad172de-b4a9-410a-915f-26aacd86b60a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-skk4w"
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.526179 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/aad172de-b4a9-410a-915f-26aacd86b60a-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-skk4w\" (UID: \"aad172de-b4a9-410a-915f-26aacd86b60a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-skk4w"
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.526207 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aad172de-b4a9-410a-915f-26aacd86b60a-trusted-ca\") pod \"image-registry-5d9d95bf5b-skk4w\" (UID: \"aad172de-b4a9-410a-915f-26aacd86b60a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-skk4w"
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.526270 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-skk4w\" (UID: \"aad172de-b4a9-410a-915f-26aacd86b60a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-skk4w"
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.526353 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aad172de-b4a9-410a-915f-26aacd86b60a-bound-sa-token\") pod \"image-registry-5d9d95bf5b-skk4w\" (UID: \"aad172de-b4a9-410a-915f-26aacd86b60a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-skk4w"
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.526395 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/aad172de-b4a9-410a-915f-26aacd86b60a-registry-tls\") pod \"image-registry-5d9d95bf5b-skk4w\" (UID: \"aad172de-b4a9-410a-915f-26aacd86b60a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-skk4w"
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.526441 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/aad172de-b4a9-410a-915f-26aacd86b60a-registry-certificates\") pod \"image-registry-5d9d95bf5b-skk4w\" (UID: \"aad172de-b4a9-410a-915f-26aacd86b60a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-skk4w"
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.526465 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/aad172de-b4a9-410a-915f-26aacd86b60a-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-skk4w\" (UID: \"aad172de-b4a9-410a-915f-26aacd86b60a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-skk4w"
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.527863 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5jhqk"
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.553321 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-skk4w\" (UID: \"aad172de-b4a9-410a-915f-26aacd86b60a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-skk4w"
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.560239 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wh5g8"]
Jan 21 09:23:48 crc kubenswrapper[5113]: W0121 09:23:48.574141 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46b7f551_fb42_475f_8c13_7810f0eed33e.slice/crio-97e59cbcd66d4ecb39baa5b7dae1db271f3fcc94f9a7093158d04ba044c0de32 WatchSource:0}: Error finding container 97e59cbcd66d4ecb39baa5b7dae1db271f3fcc94f9a7093158d04ba044c0de32: Status 404 returned error can't find the container with id 97e59cbcd66d4ecb39baa5b7dae1db271f3fcc94f9a7093158d04ba044c0de32
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.628130 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aad172de-b4a9-410a-915f-26aacd86b60a-bound-sa-token\") pod \"image-registry-5d9d95bf5b-skk4w\" (UID: \"aad172de-b4a9-410a-915f-26aacd86b60a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-skk4w"
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.628186 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/aad172de-b4a9-410a-915f-26aacd86b60a-registry-tls\") pod \"image-registry-5d9d95bf5b-skk4w\" (UID: \"aad172de-b4a9-410a-915f-26aacd86b60a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-skk4w"
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.628226 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/aad172de-b4a9-410a-915f-26aacd86b60a-registry-certificates\") pod \"image-registry-5d9d95bf5b-skk4w\" (UID: \"aad172de-b4a9-410a-915f-26aacd86b60a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-skk4w"
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.628255 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/aad172de-b4a9-410a-915f-26aacd86b60a-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-skk4w\" (UID: \"aad172de-b4a9-410a-915f-26aacd86b60a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-skk4w"
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.628302 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k9z28\" (UniqueName: \"kubernetes.io/projected/aad172de-b4a9-410a-915f-26aacd86b60a-kube-api-access-k9z28\") pod \"image-registry-5d9d95bf5b-skk4w\" (UID: \"aad172de-b4a9-410a-915f-26aacd86b60a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-skk4w"
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.628327 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/aad172de-b4a9-410a-915f-26aacd86b60a-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-skk4w\" (UID: \"aad172de-b4a9-410a-915f-26aacd86b60a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-skk4w"
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.628350 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aad172de-b4a9-410a-915f-26aacd86b60a-trusted-ca\") pod \"image-registry-5d9d95bf5b-skk4w\" (UID: \"aad172de-b4a9-410a-915f-26aacd86b60a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-skk4w"
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.629300 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/aad172de-b4a9-410a-915f-26aacd86b60a-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-skk4w\" (UID: \"aad172de-b4a9-410a-915f-26aacd86b60a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-skk4w"
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.629802 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aad172de-b4a9-410a-915f-26aacd86b60a-trusted-ca\") pod \"image-registry-5d9d95bf5b-skk4w\" (UID: \"aad172de-b4a9-410a-915f-26aacd86b60a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-skk4w"
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.630097 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/aad172de-b4a9-410a-915f-26aacd86b60a-registry-certificates\") pod \"image-registry-5d9d95bf5b-skk4w\" (UID: \"aad172de-b4a9-410a-915f-26aacd86b60a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-skk4w"
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.633774 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/aad172de-b4a9-410a-915f-26aacd86b60a-registry-tls\") pod \"image-registry-5d9d95bf5b-skk4w\" (UID: \"aad172de-b4a9-410a-915f-26aacd86b60a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-skk4w"
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.633862 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/aad172de-b4a9-410a-915f-26aacd86b60a-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-skk4w\" (UID: \"aad172de-b4a9-410a-915f-26aacd86b60a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-skk4w"
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.649457 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aad172de-b4a9-410a-915f-26aacd86b60a-bound-sa-token\") pod \"image-registry-5d9d95bf5b-skk4w\" (UID: \"aad172de-b4a9-410a-915f-26aacd86b60a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-skk4w"
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.650909 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9z28\" (UniqueName: \"kubernetes.io/projected/aad172de-b4a9-410a-915f-26aacd86b60a-kube-api-access-k9z28\") pod \"image-registry-5d9d95bf5b-skk4w\" (UID: \"aad172de-b4a9-410a-915f-26aacd86b60a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-skk4w"
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.863496 5113 generic.go:358] "Generic (PLEG): container finished" podID="46b7f551-fb42-475f-8c13-7810f0eed33e" containerID="1c1991d71a4561b18b4838072807569eea071b13c1dcfbb2e86d656ac968d22f" exitCode=0
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.863567 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wh5g8" event={"ID":"46b7f551-fb42-475f-8c13-7810f0eed33e","Type":"ContainerDied","Data":"1c1991d71a4561b18b4838072807569eea071b13c1dcfbb2e86d656ac968d22f"}
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.863883 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wh5g8" event={"ID":"46b7f551-fb42-475f-8c13-7810f0eed33e","Type":"ContainerStarted","Data":"97e59cbcd66d4ecb39baa5b7dae1db271f3fcc94f9a7093158d04ba044c0de32"}
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.865425 5113 generic.go:358] "Generic (PLEG): container finished" podID="b1401cee-74bd-45dd-b2c8-e9ff222854dc" containerID="43768afbb54fa95392c3eed2b82ccaab3c0ee98a5a97981ab37d06055f9c1b18" exitCode=0
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.865514 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vgn4p" event={"ID":"b1401cee-74bd-45dd-b2c8-e9ff222854dc","Type":"ContainerDied","Data":"43768afbb54fa95392c3eed2b82ccaab3c0ee98a5a97981ab37d06055f9c1b18"}
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.865560 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vgn4p" event={"ID":"b1401cee-74bd-45dd-b2c8-e9ff222854dc","Type":"ContainerStarted","Data":"ab56acb7d7845a18d1249854b7c3702f03a40eaf2223d24d327ac7b1ba5be072"}
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.870542 5113 generic.go:358] "Generic (PLEG): container finished" podID="8e56cc9d-707d-40d1-9ea5-29233a2270ea" containerID="5d70620c1bd558bee29109c5c3a37862839524a5b440a413366ed519d2f5d9ae" exitCode=0
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.870662 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nsgd7" event={"ID":"8e56cc9d-707d-40d1-9ea5-29233a2270ea","Type":"ContainerDied","Data":"5d70620c1bd558bee29109c5c3a37862839524a5b440a413366ed519d2f5d9ae"}
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.909039 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vgn4p" podStartSLOduration=3.35409119 podStartE2EDuration="3.909017278s" podCreationTimestamp="2026-01-21 09:23:45 +0000 UTC" firstStartedPulling="2026-01-21 09:23:46.833425377 +0000 UTC m=+356.334252456" lastFinishedPulling="2026-01-21 09:23:47.388351465 +0000 UTC m=+356.889178544" observedRunningTime="2026-01-21 09:23:48.89750545 +0000 UTC m=+358.398332529" watchObservedRunningTime="2026-01-21 09:23:48.909017278 +0000 UTC m=+358.409844347"
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.938592 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-skk4w"
Jan 21 09:23:48 crc kubenswrapper[5113]: I0121 09:23:48.976712 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5jhqk"]
Jan 21 09:23:48 crc kubenswrapper[5113]: W0121 09:23:48.985614 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fc5a81a_2441_41ce_9c03_99533e7c0fc5.slice/crio-0a409da75b23f1e7308ba1c9cf4610383d18713154d8854a9a3ff09a004edd56 WatchSource:0}: Error finding container 0a409da75b23f1e7308ba1c9cf4610383d18713154d8854a9a3ff09a004edd56: Status 404 returned error can't find the container with id 0a409da75b23f1e7308ba1c9cf4610383d18713154d8854a9a3ff09a004edd56
Jan 21 09:23:49 crc kubenswrapper[5113]: I0121 09:23:49.145198 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-skk4w"]
Jan 21 09:23:49 crc kubenswrapper[5113]: W0121 09:23:49.163560 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaad172de_b4a9_410a_915f_26aacd86b60a.slice/crio-7e1b38c2d30646c4464bd9dfe3fa2dceaca396dcb6f7a81bf37d7883d12220c9 WatchSource:0}: Error finding container 7e1b38c2d30646c4464bd9dfe3fa2dceaca396dcb6f7a81bf37d7883d12220c9: Status 404 returned error can't find the container with id 7e1b38c2d30646c4464bd9dfe3fa2dceaca396dcb6f7a81bf37d7883d12220c9
Jan 21 09:23:49 crc kubenswrapper[5113]: I0121 09:23:49.878787 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wh5g8" event={"ID":"46b7f551-fb42-475f-8c13-7810f0eed33e","Type":"ContainerStarted","Data":"ff80845addf1accf2b6db7b401c5a88bb84987b587a03590ddc07708f726ad27"}
Jan 21 09:23:49 crc kubenswrapper[5113]: I0121 09:23:49.880622 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-skk4w" event={"ID":"aad172de-b4a9-410a-915f-26aacd86b60a","Type":"ContainerStarted","Data":"75795ff5a3713234869a9edb9add50bfcc582d96df246b7f4012099f0c005c72"}
Jan 21 09:23:49 crc kubenswrapper[5113]: I0121 09:23:49.880660 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-skk4w" event={"ID":"aad172de-b4a9-410a-915f-26aacd86b60a","Type":"ContainerStarted","Data":"7e1b38c2d30646c4464bd9dfe3fa2dceaca396dcb6f7a81bf37d7883d12220c9"}
Jan 21 09:23:49 crc kubenswrapper[5113]: I0121 09:23:49.880761 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-skk4w"
Jan 21 09:23:49 crc kubenswrapper[5113]: I0121 09:23:49.883988 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nsgd7" event={"ID":"8e56cc9d-707d-40d1-9ea5-29233a2270ea","Type":"ContainerStarted","Data":"f9f3abf4d29de694da04f35f8faaf57facd0acc239fcb0f134722b75928c4dd9"}
Jan 21 09:23:49 crc kubenswrapper[5113]: I0121 09:23:49.885773 5113 generic.go:358] "Generic (PLEG): container finished" podID="0fc5a81a-2441-41ce-9c03-99533e7c0fc5" containerID="990bec755c6946fc111fc891e5ec628232a8e5299044fd2bd5136f90b61027aa" exitCode=0
Jan 21 09:23:49 crc kubenswrapper[5113]: I0121 09:23:49.885844 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jhqk" event={"ID":"0fc5a81a-2441-41ce-9c03-99533e7c0fc5","Type":"ContainerDied","Data":"990bec755c6946fc111fc891e5ec628232a8e5299044fd2bd5136f90b61027aa"}
Jan 21 09:23:49 crc kubenswrapper[5113]: I0121 09:23:49.885895 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jhqk" event={"ID":"0fc5a81a-2441-41ce-9c03-99533e7c0fc5","Type":"ContainerStarted","Data":"0a409da75b23f1e7308ba1c9cf4610383d18713154d8854a9a3ff09a004edd56"}
Jan 21 09:23:49 crc kubenswrapper[5113]: I0121 09:23:49.913399 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-skk4w" podStartSLOduration=1.913379302 podStartE2EDuration="1.913379302s" podCreationTimestamp="2026-01-21 09:23:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:23:49.912226029 +0000 UTC m=+359.413053078" watchObservedRunningTime="2026-01-21 09:23:49.913379302 +0000 UTC m=+359.414206361"
Jan 21 09:23:49 crc kubenswrapper[5113]: I0121 09:23:49.944986 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nsgd7" podStartSLOduration=4.374939552 podStartE2EDuration="4.944971441s" podCreationTimestamp="2026-01-21 09:23:45 +0000 UTC" firstStartedPulling="2026-01-21 09:23:46.839050147 +0000 UTC m=+356.339877226" lastFinishedPulling="2026-01-21 09:23:47.409082066 +0000 UTC m=+356.909909115" observedRunningTime="2026-01-21 09:23:49.94282465 +0000 UTC m=+359.443651719" watchObservedRunningTime="2026-01-21 09:23:49.944971441 +0000 UTC m=+359.445798490"
Jan 21 09:23:50 crc kubenswrapper[5113]: I0121 09:23:50.894806 5113 generic.go:358] "Generic (PLEG): container finished" podID="46b7f551-fb42-475f-8c13-7810f0eed33e" containerID="ff80845addf1accf2b6db7b401c5a88bb84987b587a03590ddc07708f726ad27" exitCode=0
Jan 21 09:23:50 crc kubenswrapper[5113]: I0121 09:23:50.894887 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wh5g8" event={"ID":"46b7f551-fb42-475f-8c13-7810f0eed33e","Type":"ContainerDied","Data":"ff80845addf1accf2b6db7b401c5a88bb84987b587a03590ddc07708f726ad27"}
Jan 21 09:23:51 crc kubenswrapper[5113]: I0121 09:23:51.909534 5113 generic.go:358] "Generic (PLEG): container finished" podID="0fc5a81a-2441-41ce-9c03-99533e7c0fc5" containerID="d98a59f58a0cacc7f09bac056305c910ed4ff2966de5a9736519b1d6cd4ff53e" exitCode=0
Jan 21 09:23:51 crc kubenswrapper[5113]: I0121 09:23:51.909645 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jhqk" event={"ID":"0fc5a81a-2441-41ce-9c03-99533e7c0fc5","Type":"ContainerDied","Data":"d98a59f58a0cacc7f09bac056305c910ed4ff2966de5a9736519b1d6cd4ff53e"}
Jan 21 09:23:51 crc kubenswrapper[5113]: I0121 09:23:51.913281 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wh5g8" event={"ID":"46b7f551-fb42-475f-8c13-7810f0eed33e","Type":"ContainerStarted","Data":"7eb94eaf77e7eca134bb25975e9e1ce72cd96f56ba2d32315d2422cae9fc24ec"}
Jan 21 09:23:51 crc kubenswrapper[5113]: I0121 09:23:51.950148 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wh5g8" podStartSLOduration=4.211320714 podStartE2EDuration="4.950131088s" podCreationTimestamp="2026-01-21 09:23:47 +0000 UTC" firstStartedPulling="2026-01-21 09:23:48.865113538 +0000 UTC m=+358.365940597" lastFinishedPulling="2026-01-21 09:23:49.603923912 +0000 UTC m=+359.104750971" observedRunningTime="2026-01-21 09:23:51.945230258 +0000 UTC m=+361.446057307" watchObservedRunningTime="2026-01-21 09:23:51.950131088 +0000 UTC m=+361.450958127"
Jan 21 09:23:52 crc kubenswrapper[5113]: I0121 09:23:52.920223 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5jhqk" event={"ID":"0fc5a81a-2441-41ce-9c03-99533e7c0fc5","Type":"ContainerStarted","Data":"d884057d51b4189db5c90d9bac1460fdf29d3132386bbebab9e44d51a0e5c08c"}
Jan 21 09:23:52 crc kubenswrapper[5113]: I0121 09:23:52.938358 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5jhqk" podStartSLOduration=4.052383629 podStartE2EDuration="4.938337822s" podCreationTimestamp="2026-01-21 09:23:48 +0000 UTC" firstStartedPulling="2026-01-21 09:23:49.886424025 +0000 UTC m=+359.387251074" lastFinishedPulling="2026-01-21 09:23:50.772378208 +0000 UTC m=+360.273205267" observedRunningTime="2026-01-21 09:23:52.934169643 +0000 UTC m=+362.434996692" watchObservedRunningTime="2026-01-21 09:23:52.938337822 +0000 UTC m=+362.439164871"
Jan 21 09:23:56 crc kubenswrapper[5113]: I0121 09:23:56.127868 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-nsgd7"
Jan 21 09:23:56 crc kubenswrapper[5113]: I0121 09:23:56.128448 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nsgd7"
Jan 21 09:23:56 crc kubenswrapper[5113]: I0121 09:23:56.168601 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nsgd7"
Jan 21 09:23:56 crc kubenswrapper[5113]: I0121 09:23:56.218385 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vgn4p"
Jan 21 09:23:56 crc kubenswrapper[5113]: I0121 09:23:56.218433 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-vgn4p"
Jan 21 09:23:56 crc kubenswrapper[5113]: I0121 09:23:56.263586 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vgn4p"
Jan 21 09:23:56 crc kubenswrapper[5113]: I0121 09:23:56.991941 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nsgd7"
Jan 21 09:23:56 crc kubenswrapper[5113]: I0121 09:23:56.994481 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vgn4p"
Jan 21 09:23:58 crc kubenswrapper[5113]: I0121 09:23:58.350975 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wh5g8"
Jan 21 09:23:58 crc kubenswrapper[5113]: I0121 09:23:58.351367 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-wh5g8"
Jan 21 09:23:58 crc kubenswrapper[5113]: I0121 09:23:58.406328 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wh5g8"
Jan 21 09:23:58 crc kubenswrapper[5113]: I0121 09:23:58.528704 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-5jhqk"
Jan 21 09:23:58 crc kubenswrapper[5113]: I0121 09:23:58.529031 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5jhqk"
Jan 21 09:23:58 crc kubenswrapper[5113]: I0121 09:23:58.595479 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5jhqk"
Jan 21 09:23:59 crc kubenswrapper[5113]: I0121 09:23:59.018700 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wh5g8"
Jan 21 09:23:59 crc kubenswrapper[5113]: I0121 09:23:59.024156 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5jhqk"
Jan 21 09:24:00 crc kubenswrapper[5113]: I0121 09:24:00.166934 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483124-xtdgv"]
Jan 21 09:24:00 crc kubenswrapper[5113]: I0121 09:24:00.600134 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483124-xtdgv"]
Jan 21 09:24:00 crc kubenswrapper[5113]: I0121 09:24:00.600247 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483124-xtdgv"
Jan 21 09:24:00 crc kubenswrapper[5113]: I0121 09:24:00.602288 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 21 09:24:00 crc kubenswrapper[5113]: I0121 09:24:00.602995 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-mmrr5\""
Jan 21 09:24:00 crc kubenswrapper[5113]: I0121 09:24:00.603322 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 21 09:24:00 crc kubenswrapper[5113]: I0121 09:24:00.695970 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzx7r\" (UniqueName: \"kubernetes.io/projected/16a2bb76-5c6a-4cbb-a2bb-6a6cc8687894-kube-api-access-dzx7r\") pod \"auto-csr-approver-29483124-xtdgv\" (UID: \"16a2bb76-5c6a-4cbb-a2bb-6a6cc8687894\") " pod="openshift-infra/auto-csr-approver-29483124-xtdgv"
Jan 21 09:24:00 crc kubenswrapper[5113]: I0121 09:24:00.797016 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dzx7r\" (UniqueName: \"kubernetes.io/projected/16a2bb76-5c6a-4cbb-a2bb-6a6cc8687894-kube-api-access-dzx7r\") pod \"auto-csr-approver-29483124-xtdgv\" (UID: \"16a2bb76-5c6a-4cbb-a2bb-6a6cc8687894\") " pod="openshift-infra/auto-csr-approver-29483124-xtdgv"
Jan 21 09:24:00 crc kubenswrapper[5113]: I0121 09:24:00.823414 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzx7r\" (UniqueName: \"kubernetes.io/projected/16a2bb76-5c6a-4cbb-a2bb-6a6cc8687894-kube-api-access-dzx7r\") pod \"auto-csr-approver-29483124-xtdgv\" (UID: \"16a2bb76-5c6a-4cbb-a2bb-6a6cc8687894\") " pod="openshift-infra/auto-csr-approver-29483124-xtdgv"
Jan 21 09:24:00 crc kubenswrapper[5113]: I0121 09:24:00.916246 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483124-xtdgv"
Jan 21 09:24:01 crc kubenswrapper[5113]: I0121 09:24:01.103332 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483124-xtdgv"]
Jan 21 09:24:01 crc kubenswrapper[5113]: W0121 09:24:01.115256 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16a2bb76_5c6a_4cbb_a2bb_6a6cc8687894.slice/crio-09181e80d77c55232ae894d4abd87c84e8edccf69747a5c98e45e0487155876a WatchSource:0}: Error finding container 09181e80d77c55232ae894d4abd87c84e8edccf69747a5c98e45e0487155876a: Status 404 returned error can't find the container with id 09181e80d77c55232ae894d4abd87c84e8edccf69747a5c98e45e0487155876a
Jan 21 09:24:01 crc kubenswrapper[5113]: I0121 09:24:01.974452 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483124-xtdgv" event={"ID":"16a2bb76-5c6a-4cbb-a2bb-6a6cc8687894","Type":"ContainerStarted","Data":"09181e80d77c55232ae894d4abd87c84e8edccf69747a5c98e45e0487155876a"}
Jan 21 09:24:06 crc kubenswrapper[5113]: I0121 09:24:06.000352 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483124-xtdgv" event={"ID":"16a2bb76-5c6a-4cbb-a2bb-6a6cc8687894","Type":"ContainerStarted","Data":"cdca3673f2e37f84a3b4b47d9ed6475d580ad0ee5a65c8594f48ee96506caed4"}
Jan 21 09:24:06 crc kubenswrapper[5113]: I0121 09:24:06.019272 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29483124-xtdgv" podStartSLOduration=1.403539276 podStartE2EDuration="6.019249612s" podCreationTimestamp="2026-01-21 09:24:00 +0000 UTC" firstStartedPulling="2026-01-21 09:24:01.120634511 +0000 UTC m=+370.621461560" lastFinishedPulling="2026-01-21 09:24:05.736344837 +0000 UTC m=+375.237171896" observedRunningTime="2026-01-21 09:24:06.012724746 +0000 UTC m=+375.513551795" watchObservedRunningTime="2026-01-21 09:24:06.019249612 +0000 UTC m=+375.520076691"
Jan 21 09:24:06 crc kubenswrapper[5113]: I0121 09:24:06.338015 5113 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-697bx"
Jan 21 09:24:06 crc kubenswrapper[5113]: I0121 09:24:06.360677 5113 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-697bx"
Jan 21 09:24:07 crc kubenswrapper[5113]: I0121 09:24:07.006217 5113 generic.go:358] "Generic (PLEG): container finished" podID="16a2bb76-5c6a-4cbb-a2bb-6a6cc8687894" containerID="cdca3673f2e37f84a3b4b47d9ed6475d580ad0ee5a65c8594f48ee96506caed4" exitCode=0
Jan 21 09:24:07 crc kubenswrapper[5113]: I0121 09:24:07.006303 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483124-xtdgv" event={"ID":"16a2bb76-5c6a-4cbb-a2bb-6a6cc8687894","Type":"ContainerDied","Data":"cdca3673f2e37f84a3b4b47d9ed6475d580ad0ee5a65c8594f48ee96506caed4"}
Jan 21 09:24:07 crc kubenswrapper[5113]: I0121 09:24:07.361777 5113 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-02-20 09:19:06 +0000 UTC" deadline="2026-02-12 03:36:33.74588127 +0000 UTC"
Jan 21 09:24:07 crc kubenswrapper[5113]: I0121 09:24:07.362078 5113 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="522h12m26.383809298s"
Jan 21 09:24:08 crc kubenswrapper[5113]: I0121 09:24:08.240714 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483124-xtdgv"
Jan 21 09:24:08 crc kubenswrapper[5113]: I0121 09:24:08.309819 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzx7r\" (UniqueName: \"kubernetes.io/projected/16a2bb76-5c6a-4cbb-a2bb-6a6cc8687894-kube-api-access-dzx7r\") pod \"16a2bb76-5c6a-4cbb-a2bb-6a6cc8687894\" (UID: \"16a2bb76-5c6a-4cbb-a2bb-6a6cc8687894\") "
Jan 21 09:24:08 crc kubenswrapper[5113]: I0121 09:24:08.315982 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16a2bb76-5c6a-4cbb-a2bb-6a6cc8687894-kube-api-access-dzx7r" (OuterVolumeSpecName: "kube-api-access-dzx7r") pod "16a2bb76-5c6a-4cbb-a2bb-6a6cc8687894" (UID: "16a2bb76-5c6a-4cbb-a2bb-6a6cc8687894"). InnerVolumeSpecName "kube-api-access-dzx7r". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:24:08 crc kubenswrapper[5113]: I0121 09:24:08.363185 5113 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-02-20 09:19:06 +0000 UTC" deadline="2026-02-15 23:35:00.438904587 +0000 UTC"
Jan 21 09:24:08 crc kubenswrapper[5113]: I0121 09:24:08.363421 5113 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="614h10m52.075486166s"
Jan 21 09:24:08 crc kubenswrapper[5113]: I0121 09:24:08.411428 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dzx7r\" (UniqueName: \"kubernetes.io/projected/16a2bb76-5c6a-4cbb-a2bb-6a6cc8687894-kube-api-access-dzx7r\") on node \"crc\" DevicePath \"\""
Jan 21 09:24:09 crc kubenswrapper[5113]: I0121 09:24:09.017480 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483124-xtdgv"
Jan 21 09:24:09 crc kubenswrapper[5113]: I0121 09:24:09.017495 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483124-xtdgv" event={"ID":"16a2bb76-5c6a-4cbb-a2bb-6a6cc8687894","Type":"ContainerDied","Data":"09181e80d77c55232ae894d4abd87c84e8edccf69747a5c98e45e0487155876a"}
Jan 21 09:24:09 crc kubenswrapper[5113]: I0121 09:24:09.017538 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09181e80d77c55232ae894d4abd87c84e8edccf69747a5c98e45e0487155876a"
Jan 21 09:24:10 crc kubenswrapper[5113]: I0121 09:24:10.909906 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-skk4w"
Jan 21 09:24:10 crc kubenswrapper[5113]: I0121 09:24:10.967555 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-wt9pn"]
Jan 21 09:24:28 crc kubenswrapper[5113]: I0121 09:24:28.340000 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 09:24:28 crc kubenswrapper[5113]: I0121 09:24:28.340567 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 09:24:36 crc kubenswrapper[5113]: I0121 09:24:36.013684 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" podUID="fcd945f6-07b1-46f0-9c38-69d04075b569"
containerName="registry" containerID="cri-o://0152125a188ded39defe1f81c241c5738510ef521a6e9a9732404236d8b81def" gracePeriod=30 Jan 21 09:24:36 crc kubenswrapper[5113]: I0121 09:24:36.225389 5113 generic.go:358] "Generic (PLEG): container finished" podID="fcd945f6-07b1-46f0-9c38-69d04075b569" containerID="0152125a188ded39defe1f81c241c5738510ef521a6e9a9732404236d8b81def" exitCode=0 Jan 21 09:24:36 crc kubenswrapper[5113]: I0121 09:24:36.225479 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" event={"ID":"fcd945f6-07b1-46f0-9c38-69d04075b569","Type":"ContainerDied","Data":"0152125a188ded39defe1f81c241c5738510ef521a6e9a9732404236d8b81def"} Jan 21 09:24:36 crc kubenswrapper[5113]: I0121 09:24:36.408574 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:24:36 crc kubenswrapper[5113]: I0121 09:24:36.524007 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztqpt\" (UniqueName: \"kubernetes.io/projected/fcd945f6-07b1-46f0-9c38-69d04075b569-kube-api-access-ztqpt\") pod \"fcd945f6-07b1-46f0-9c38-69d04075b569\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " Jan 21 09:24:36 crc kubenswrapper[5113]: I0121 09:24:36.524161 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fcd945f6-07b1-46f0-9c38-69d04075b569-installation-pull-secrets\") pod \"fcd945f6-07b1-46f0-9c38-69d04075b569\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " Jan 21 09:24:36 crc kubenswrapper[5113]: I0121 09:24:36.525802 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fcd945f6-07b1-46f0-9c38-69d04075b569-registry-tls\") pod \"fcd945f6-07b1-46f0-9c38-69d04075b569\" (UID: 
\"fcd945f6-07b1-46f0-9c38-69d04075b569\") " Jan 21 09:24:36 crc kubenswrapper[5113]: I0121 09:24:36.525898 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fcd945f6-07b1-46f0-9c38-69d04075b569-registry-certificates\") pod \"fcd945f6-07b1-46f0-9c38-69d04075b569\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " Jan 21 09:24:36 crc kubenswrapper[5113]: I0121 09:24:36.525943 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fcd945f6-07b1-46f0-9c38-69d04075b569-ca-trust-extracted\") pod \"fcd945f6-07b1-46f0-9c38-69d04075b569\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " Jan 21 09:24:36 crc kubenswrapper[5113]: I0121 09:24:36.526097 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"fcd945f6-07b1-46f0-9c38-69d04075b569\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " Jan 21 09:24:36 crc kubenswrapper[5113]: I0121 09:24:36.526146 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fcd945f6-07b1-46f0-9c38-69d04075b569-trusted-ca\") pod \"fcd945f6-07b1-46f0-9c38-69d04075b569\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " Jan 21 09:24:36 crc kubenswrapper[5113]: I0121 09:24:36.526201 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fcd945f6-07b1-46f0-9c38-69d04075b569-bound-sa-token\") pod \"fcd945f6-07b1-46f0-9c38-69d04075b569\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " Jan 21 09:24:36 crc kubenswrapper[5113]: I0121 09:24:36.526963 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/fcd945f6-07b1-46f0-9c38-69d04075b569-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "fcd945f6-07b1-46f0-9c38-69d04075b569" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:24:36 crc kubenswrapper[5113]: I0121 09:24:36.527323 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcd945f6-07b1-46f0-9c38-69d04075b569-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "fcd945f6-07b1-46f0-9c38-69d04075b569" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:24:36 crc kubenswrapper[5113]: I0121 09:24:36.534175 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcd945f6-07b1-46f0-9c38-69d04075b569-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "fcd945f6-07b1-46f0-9c38-69d04075b569" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:24:36 crc kubenswrapper[5113]: I0121 09:24:36.534582 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcd945f6-07b1-46f0-9c38-69d04075b569-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "fcd945f6-07b1-46f0-9c38-69d04075b569" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:24:36 crc kubenswrapper[5113]: I0121 09:24:36.535541 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcd945f6-07b1-46f0-9c38-69d04075b569-kube-api-access-ztqpt" (OuterVolumeSpecName: "kube-api-access-ztqpt") pod "fcd945f6-07b1-46f0-9c38-69d04075b569" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569"). InnerVolumeSpecName "kube-api-access-ztqpt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:24:36 crc kubenswrapper[5113]: I0121 09:24:36.536972 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcd945f6-07b1-46f0-9c38-69d04075b569-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "fcd945f6-07b1-46f0-9c38-69d04075b569" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:24:36 crc kubenswrapper[5113]: E0121 09:24:36.537489 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:fcd945f6-07b1-46f0-9c38-69d04075b569 nodeName:}" failed. No retries permitted until 2026-01-21 09:24:37.037440873 +0000 UTC m=+406.538267922 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "registry-storage" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "fcd945f6-07b1-46f0-9c38-69d04075b569" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Unknown desc = check target path: could not get consistent content of /proc/mounts after 3 attempts Jan 21 09:24:36 crc kubenswrapper[5113]: I0121 09:24:36.544999 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcd945f6-07b1-46f0-9c38-69d04075b569-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "fcd945f6-07b1-46f0-9c38-69d04075b569" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:24:36 crc kubenswrapper[5113]: I0121 09:24:36.628135 5113 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fcd945f6-07b1-46f0-9c38-69d04075b569-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 09:24:36 crc kubenswrapper[5113]: I0121 09:24:36.628169 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ztqpt\" (UniqueName: \"kubernetes.io/projected/fcd945f6-07b1-46f0-9c38-69d04075b569-kube-api-access-ztqpt\") on node \"crc\" DevicePath \"\"" Jan 21 09:24:36 crc kubenswrapper[5113]: I0121 09:24:36.628182 5113 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fcd945f6-07b1-46f0-9c38-69d04075b569-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 21 09:24:36 crc kubenswrapper[5113]: I0121 09:24:36.628192 5113 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fcd945f6-07b1-46f0-9c38-69d04075b569-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 21 
09:24:36 crc kubenswrapper[5113]: I0121 09:24:36.628203 5113 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fcd945f6-07b1-46f0-9c38-69d04075b569-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 21 09:24:36 crc kubenswrapper[5113]: I0121 09:24:36.628213 5113 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fcd945f6-07b1-46f0-9c38-69d04075b569-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 21 09:24:36 crc kubenswrapper[5113]: I0121 09:24:36.628223 5113 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fcd945f6-07b1-46f0-9c38-69d04075b569-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 09:24:37 crc kubenswrapper[5113]: I0121 09:24:37.135183 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"fcd945f6-07b1-46f0-9c38-69d04075b569\" (UID: \"fcd945f6-07b1-46f0-9c38-69d04075b569\") " Jan 21 09:24:37 crc kubenswrapper[5113]: I0121 09:24:37.146412 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "fcd945f6-07b1-46f0-9c38-69d04075b569" (UID: "fcd945f6-07b1-46f0-9c38-69d04075b569"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". 
PluginName "kubernetes.io/csi", VolumeGIDValue "" Jan 21 09:24:37 crc kubenswrapper[5113]: I0121 09:24:37.237996 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" event={"ID":"fcd945f6-07b1-46f0-9c38-69d04075b569","Type":"ContainerDied","Data":"b58ef55b2ae3331fc304092b986e44a194dedbc04a09873ddbb6ae37425b456a"} Jan 21 09:24:37 crc kubenswrapper[5113]: I0121 09:24:37.238090 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-wt9pn" Jan 21 09:24:37 crc kubenswrapper[5113]: I0121 09:24:37.238076 5113 scope.go:117] "RemoveContainer" containerID="0152125a188ded39defe1f81c241c5738510ef521a6e9a9732404236d8b81def" Jan 21 09:24:37 crc kubenswrapper[5113]: I0121 09:24:37.285433 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-wt9pn"] Jan 21 09:24:37 crc kubenswrapper[5113]: I0121 09:24:37.298336 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-wt9pn"] Jan 21 09:24:38 crc kubenswrapper[5113]: I0121 09:24:38.856178 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcd945f6-07b1-46f0-9c38-69d04075b569" path="/var/lib/kubelet/pods/fcd945f6-07b1-46f0-9c38-69d04075b569/volumes" Jan 21 09:24:58 crc kubenswrapper[5113]: I0121 09:24:58.340299 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 09:24:58 crc kubenswrapper[5113]: I0121 09:24:58.341132 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 09:25:28 crc kubenswrapper[5113]: I0121 09:25:28.340170 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 09:25:28 crc kubenswrapper[5113]: I0121 09:25:28.340935 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 09:25:28 crc kubenswrapper[5113]: I0121 09:25:28.340999 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" Jan 21 09:25:28 crc kubenswrapper[5113]: I0121 09:25:28.341891 5113 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fcba58102e4ffc568fba5db5a32bae6eab170fa71ae03eed6be1f8584029c248"} pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 09:25:28 crc kubenswrapper[5113]: I0121 09:25:28.342033 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" containerID="cri-o://fcba58102e4ffc568fba5db5a32bae6eab170fa71ae03eed6be1f8584029c248" gracePeriod=600 Jan 21 09:25:28 crc kubenswrapper[5113]: I0121 
09:25:28.598689 5113 generic.go:358] "Generic (PLEG): container finished" podID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerID="fcba58102e4ffc568fba5db5a32bae6eab170fa71ae03eed6be1f8584029c248" exitCode=0 Jan 21 09:25:28 crc kubenswrapper[5113]: I0121 09:25:28.598778 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" event={"ID":"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd","Type":"ContainerDied","Data":"fcba58102e4ffc568fba5db5a32bae6eab170fa71ae03eed6be1f8584029c248"} Jan 21 09:25:28 crc kubenswrapper[5113]: I0121 09:25:28.598867 5113 scope.go:117] "RemoveContainer" containerID="d5e35405358954b4e44654baa2fb5a0a4140312ae1ab9e63625c319c1fc7a9a7" Jan 21 09:25:29 crc kubenswrapper[5113]: I0121 09:25:29.609335 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" event={"ID":"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd","Type":"ContainerStarted","Data":"d8dfd060598d2c2b1438ddeabfcbeb2ae3fad707ebd8779b6a758c6a6601e505"} Jan 21 09:26:00 crc kubenswrapper[5113]: I0121 09:26:00.147655 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483126-2lhx7"] Jan 21 09:26:00 crc kubenswrapper[5113]: I0121 09:26:00.149350 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="16a2bb76-5c6a-4cbb-a2bb-6a6cc8687894" containerName="oc" Jan 21 09:26:00 crc kubenswrapper[5113]: I0121 09:26:00.149375 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="16a2bb76-5c6a-4cbb-a2bb-6a6cc8687894" containerName="oc" Jan 21 09:26:00 crc kubenswrapper[5113]: I0121 09:26:00.149417 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fcd945f6-07b1-46f0-9c38-69d04075b569" containerName="registry" Jan 21 09:26:00 crc kubenswrapper[5113]: I0121 09:26:00.149431 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcd945f6-07b1-46f0-9c38-69d04075b569" 
containerName="registry" Jan 21 09:26:00 crc kubenswrapper[5113]: I0121 09:26:00.149603 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="fcd945f6-07b1-46f0-9c38-69d04075b569" containerName="registry" Jan 21 09:26:00 crc kubenswrapper[5113]: I0121 09:26:00.149626 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="16a2bb76-5c6a-4cbb-a2bb-6a6cc8687894" containerName="oc" Jan 21 09:26:00 crc kubenswrapper[5113]: I0121 09:26:00.161461 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483126-2lhx7"] Jan 21 09:26:00 crc kubenswrapper[5113]: I0121 09:26:00.161685 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483126-2lhx7" Jan 21 09:26:00 crc kubenswrapper[5113]: I0121 09:26:00.164766 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 09:26:00 crc kubenswrapper[5113]: I0121 09:26:00.165464 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 09:26:00 crc kubenswrapper[5113]: I0121 09:26:00.174456 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-mmrr5\"" Jan 21 09:26:00 crc kubenswrapper[5113]: I0121 09:26:00.211573 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpcrw\" (UniqueName: \"kubernetes.io/projected/5795524d-b047-4cd0-a10c-8b945809822a-kube-api-access-lpcrw\") pod \"auto-csr-approver-29483126-2lhx7\" (UID: \"5795524d-b047-4cd0-a10c-8b945809822a\") " pod="openshift-infra/auto-csr-approver-29483126-2lhx7" Jan 21 09:26:00 crc kubenswrapper[5113]: I0121 09:26:00.313024 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lpcrw\" (UniqueName: 
\"kubernetes.io/projected/5795524d-b047-4cd0-a10c-8b945809822a-kube-api-access-lpcrw\") pod \"auto-csr-approver-29483126-2lhx7\" (UID: \"5795524d-b047-4cd0-a10c-8b945809822a\") " pod="openshift-infra/auto-csr-approver-29483126-2lhx7" Jan 21 09:26:00 crc kubenswrapper[5113]: I0121 09:26:00.354205 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpcrw\" (UniqueName: \"kubernetes.io/projected/5795524d-b047-4cd0-a10c-8b945809822a-kube-api-access-lpcrw\") pod \"auto-csr-approver-29483126-2lhx7\" (UID: \"5795524d-b047-4cd0-a10c-8b945809822a\") " pod="openshift-infra/auto-csr-approver-29483126-2lhx7" Jan 21 09:26:00 crc kubenswrapper[5113]: I0121 09:26:00.490879 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483126-2lhx7" Jan 21 09:26:00 crc kubenswrapper[5113]: I0121 09:26:00.734460 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483126-2lhx7"] Jan 21 09:26:00 crc kubenswrapper[5113]: I0121 09:26:00.859216 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483126-2lhx7" event={"ID":"5795524d-b047-4cd0-a10c-8b945809822a","Type":"ContainerStarted","Data":"6c50b0f846123e68ca83e56b02cd7cd8a4d673f8ca540bbe99de8fbe71d4dc61"} Jan 21 09:26:02 crc kubenswrapper[5113]: I0121 09:26:02.871337 5113 generic.go:358] "Generic (PLEG): container finished" podID="5795524d-b047-4cd0-a10c-8b945809822a" containerID="1e3ebfc9b348005d6f99ce8a6cd1328da82abcb05888f749bac5bf99b2aea168" exitCode=0 Jan 21 09:26:02 crc kubenswrapper[5113]: I0121 09:26:02.871408 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483126-2lhx7" event={"ID":"5795524d-b047-4cd0-a10c-8b945809822a","Type":"ContainerDied","Data":"1e3ebfc9b348005d6f99ce8a6cd1328da82abcb05888f749bac5bf99b2aea168"} Jan 21 09:26:04 crc kubenswrapper[5113]: I0121 09:26:04.201189 5113 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483126-2lhx7" Jan 21 09:26:04 crc kubenswrapper[5113]: I0121 09:26:04.268300 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpcrw\" (UniqueName: \"kubernetes.io/projected/5795524d-b047-4cd0-a10c-8b945809822a-kube-api-access-lpcrw\") pod \"5795524d-b047-4cd0-a10c-8b945809822a\" (UID: \"5795524d-b047-4cd0-a10c-8b945809822a\") " Jan 21 09:26:04 crc kubenswrapper[5113]: I0121 09:26:04.287960 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5795524d-b047-4cd0-a10c-8b945809822a-kube-api-access-lpcrw" (OuterVolumeSpecName: "kube-api-access-lpcrw") pod "5795524d-b047-4cd0-a10c-8b945809822a" (UID: "5795524d-b047-4cd0-a10c-8b945809822a"). InnerVolumeSpecName "kube-api-access-lpcrw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:26:04 crc kubenswrapper[5113]: I0121 09:26:04.369629 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lpcrw\" (UniqueName: \"kubernetes.io/projected/5795524d-b047-4cd0-a10c-8b945809822a-kube-api-access-lpcrw\") on node \"crc\" DevicePath \"\"" Jan 21 09:26:04 crc kubenswrapper[5113]: I0121 09:26:04.890020 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483126-2lhx7" event={"ID":"5795524d-b047-4cd0-a10c-8b945809822a","Type":"ContainerDied","Data":"6c50b0f846123e68ca83e56b02cd7cd8a4d673f8ca540bbe99de8fbe71d4dc61"} Jan 21 09:26:04 crc kubenswrapper[5113]: I0121 09:26:04.890060 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c50b0f846123e68ca83e56b02cd7cd8a4d673f8ca540bbe99de8fbe71d4dc61" Jan 21 09:26:04 crc kubenswrapper[5113]: I0121 09:26:04.890083 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483126-2lhx7" Jan 21 09:26:51 crc kubenswrapper[5113]: I0121 09:26:51.145175 5113 scope.go:117] "RemoveContainer" containerID="a1b545b4ef756a2a4e15460361651faf9ce3d1c59328292f061b590256135999" Jan 21 09:26:51 crc kubenswrapper[5113]: I0121 09:26:51.180040 5113 scope.go:117] "RemoveContainer" containerID="428ba9dd346a511bf265107b1cfdbcb274364984449155e2e7682b2a21be7644" Jan 21 09:26:51 crc kubenswrapper[5113]: I0121 09:26:51.196605 5113 scope.go:117] "RemoveContainer" containerID="c3427ad5e58b17f0609fb09970af3806f510884844b707dc6288249767dfa772" Jan 21 09:26:51 crc kubenswrapper[5113]: I0121 09:26:51.212840 5113 scope.go:117] "RemoveContainer" containerID="c45436242979b264329436e527a69d053d45072b5b97a79ab3c727d8fb9a9297" Jan 21 09:26:51 crc kubenswrapper[5113]: I0121 09:26:51.235617 5113 scope.go:117] "RemoveContainer" containerID="1d5a01f414be232877a258ab4dfa74cd9b9537aee36fe9357ad6dec1121afe04" Jan 21 09:26:51 crc kubenswrapper[5113]: I0121 09:26:51.260532 5113 scope.go:117] "RemoveContainer" containerID="9cefbe9377e0f7238f8953e10255c030ba6b72124d88fcc772d330a545234a0e" Jan 21 09:27:28 crc kubenswrapper[5113]: I0121 09:27:28.340551 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 09:27:28 crc kubenswrapper[5113]: I0121 09:27:28.341650 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 09:27:51 crc kubenswrapper[5113]: I0121 09:27:51.168014 5113 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 09:27:51 crc kubenswrapper[5113]: I0121 09:27:51.171891 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 09:27:58 crc kubenswrapper[5113]: I0121 09:27:58.340203 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 09:27:58 crc kubenswrapper[5113]: I0121 09:27:58.341046 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 09:28:00 crc kubenswrapper[5113]: I0121 09:28:00.153387 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483128-8rjfv"] Jan 21 09:28:00 crc kubenswrapper[5113]: I0121 09:28:00.154602 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5795524d-b047-4cd0-a10c-8b945809822a" containerName="oc" Jan 21 09:28:00 crc kubenswrapper[5113]: I0121 09:28:00.154650 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="5795524d-b047-4cd0-a10c-8b945809822a" containerName="oc" Jan 21 09:28:00 crc kubenswrapper[5113]: I0121 09:28:00.154934 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="5795524d-b047-4cd0-a10c-8b945809822a" containerName="oc" Jan 21 09:28:00 crc kubenswrapper[5113]: I0121 09:28:00.169571 5113 kubelet.go:2544] "SyncLoop 
UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483128-8rjfv"] Jan 21 09:28:00 crc kubenswrapper[5113]: I0121 09:28:00.169824 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483128-8rjfv" Jan 21 09:28:00 crc kubenswrapper[5113]: I0121 09:28:00.172507 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 09:28:00 crc kubenswrapper[5113]: I0121 09:28:00.173463 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 09:28:00 crc kubenswrapper[5113]: I0121 09:28:00.174073 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-mmrr5\"" Jan 21 09:28:00 crc kubenswrapper[5113]: I0121 09:28:00.301647 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjgvl\" (UniqueName: \"kubernetes.io/projected/8510340e-32f0-4f11-82c2-d57eed3356be-kube-api-access-wjgvl\") pod \"auto-csr-approver-29483128-8rjfv\" (UID: \"8510340e-32f0-4f11-82c2-d57eed3356be\") " pod="openshift-infra/auto-csr-approver-29483128-8rjfv" Jan 21 09:28:00 crc kubenswrapper[5113]: I0121 09:28:00.403224 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wjgvl\" (UniqueName: \"kubernetes.io/projected/8510340e-32f0-4f11-82c2-d57eed3356be-kube-api-access-wjgvl\") pod \"auto-csr-approver-29483128-8rjfv\" (UID: \"8510340e-32f0-4f11-82c2-d57eed3356be\") " pod="openshift-infra/auto-csr-approver-29483128-8rjfv" Jan 21 09:28:00 crc kubenswrapper[5113]: I0121 09:28:00.436783 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjgvl\" (UniqueName: \"kubernetes.io/projected/8510340e-32f0-4f11-82c2-d57eed3356be-kube-api-access-wjgvl\") pod 
\"auto-csr-approver-29483128-8rjfv\" (UID: \"8510340e-32f0-4f11-82c2-d57eed3356be\") " pod="openshift-infra/auto-csr-approver-29483128-8rjfv" Jan 21 09:28:00 crc kubenswrapper[5113]: I0121 09:28:00.500325 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483128-8rjfv" Jan 21 09:28:00 crc kubenswrapper[5113]: I0121 09:28:00.800004 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483128-8rjfv"] Jan 21 09:28:01 crc kubenswrapper[5113]: I0121 09:28:01.745121 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483128-8rjfv" event={"ID":"8510340e-32f0-4f11-82c2-d57eed3356be","Type":"ContainerStarted","Data":"d5392f65aa1effd36e0711f18abad5d1b31f8f6a3a3227186840ea49a424543e"} Jan 21 09:28:02 crc kubenswrapper[5113]: I0121 09:28:02.755872 5113 generic.go:358] "Generic (PLEG): container finished" podID="8510340e-32f0-4f11-82c2-d57eed3356be" containerID="f3bf36e9ef03df536d13938b86760024f75ebfd10aced4b4e29f9df8a9970b7d" exitCode=0 Jan 21 09:28:02 crc kubenswrapper[5113]: I0121 09:28:02.756074 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483128-8rjfv" event={"ID":"8510340e-32f0-4f11-82c2-d57eed3356be","Type":"ContainerDied","Data":"f3bf36e9ef03df536d13938b86760024f75ebfd10aced4b4e29f9df8a9970b7d"} Jan 21 09:28:04 crc kubenswrapper[5113]: I0121 09:28:04.012592 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483128-8rjfv" Jan 21 09:28:04 crc kubenswrapper[5113]: I0121 09:28:04.155265 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjgvl\" (UniqueName: \"kubernetes.io/projected/8510340e-32f0-4f11-82c2-d57eed3356be-kube-api-access-wjgvl\") pod \"8510340e-32f0-4f11-82c2-d57eed3356be\" (UID: \"8510340e-32f0-4f11-82c2-d57eed3356be\") " Jan 21 09:28:04 crc kubenswrapper[5113]: I0121 09:28:04.164923 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8510340e-32f0-4f11-82c2-d57eed3356be-kube-api-access-wjgvl" (OuterVolumeSpecName: "kube-api-access-wjgvl") pod "8510340e-32f0-4f11-82c2-d57eed3356be" (UID: "8510340e-32f0-4f11-82c2-d57eed3356be"). InnerVolumeSpecName "kube-api-access-wjgvl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:28:04 crc kubenswrapper[5113]: I0121 09:28:04.256621 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wjgvl\" (UniqueName: \"kubernetes.io/projected/8510340e-32f0-4f11-82c2-d57eed3356be-kube-api-access-wjgvl\") on node \"crc\" DevicePath \"\"" Jan 21 09:28:04 crc kubenswrapper[5113]: I0121 09:28:04.772434 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483128-8rjfv" event={"ID":"8510340e-32f0-4f11-82c2-d57eed3356be","Type":"ContainerDied","Data":"d5392f65aa1effd36e0711f18abad5d1b31f8f6a3a3227186840ea49a424543e"} Jan 21 09:28:04 crc kubenswrapper[5113]: I0121 09:28:04.772885 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5392f65aa1effd36e0711f18abad5d1b31f8f6a3a3227186840ea49a424543e" Jan 21 09:28:04 crc kubenswrapper[5113]: I0121 09:28:04.772494 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483128-8rjfv" Jan 21 09:28:28 crc kubenswrapper[5113]: I0121 09:28:28.340011 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 09:28:28 crc kubenswrapper[5113]: I0121 09:28:28.340666 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 09:28:28 crc kubenswrapper[5113]: I0121 09:28:28.340722 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" Jan 21 09:28:28 crc kubenswrapper[5113]: I0121 09:28:28.341397 5113 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d8dfd060598d2c2b1438ddeabfcbeb2ae3fad707ebd8779b6a758c6a6601e505"} pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 09:28:28 crc kubenswrapper[5113]: I0121 09:28:28.341464 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" containerID="cri-o://d8dfd060598d2c2b1438ddeabfcbeb2ae3fad707ebd8779b6a758c6a6601e505" gracePeriod=600 Jan 21 09:28:28 crc kubenswrapper[5113]: I0121 09:28:28.471414 5113 provider.go:93] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Jan 21 09:28:28 crc kubenswrapper[5113]: I0121 09:28:28.943373 5113 generic.go:358] "Generic (PLEG): container finished" podID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerID="d8dfd060598d2c2b1438ddeabfcbeb2ae3fad707ebd8779b6a758c6a6601e505" exitCode=0 Jan 21 09:28:28 crc kubenswrapper[5113]: I0121 09:28:28.943440 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" event={"ID":"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd","Type":"ContainerDied","Data":"d8dfd060598d2c2b1438ddeabfcbeb2ae3fad707ebd8779b6a758c6a6601e505"} Jan 21 09:28:28 crc kubenswrapper[5113]: I0121 09:28:28.944257 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" event={"ID":"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd","Type":"ContainerStarted","Data":"313e78d1e84417b1cc72485f1361b34ce94e49f3a7ae332408769377ab7be1a0"} Jan 21 09:28:28 crc kubenswrapper[5113]: I0121 09:28:28.944292 5113 scope.go:117] "RemoveContainer" containerID="fcba58102e4ffc568fba5db5a32bae6eab170fa71ae03eed6be1f8584029c248" Jan 21 09:28:53 crc kubenswrapper[5113]: I0121 09:28:53.897253 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-8hbvk"] Jan 21 09:28:53 crc kubenswrapper[5113]: I0121 09:28:53.898386 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-8hbvk" podUID="27afa170-d0be-48dd-a0d6-02a747bb8e63" containerName="kube-rbac-proxy" containerID="cri-o://be61f8bf6692f95398878d7cf592ca5e54db57c2c507cb7c3b04068563b154e0" gracePeriod=30 Jan 21 09:28:53 crc kubenswrapper[5113]: I0121 09:28:53.898460 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-8hbvk" podUID="27afa170-d0be-48dd-a0d6-02a747bb8e63" 
containerName="ovnkube-cluster-manager" containerID="cri-o://3a619797c7b7d5cd486395c294ff9ec7d1ea07e371c4f9dd57c8a01d3c267715" gracePeriod=30 Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.089269 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qgkx4"] Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.089696 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" podUID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" containerName="ovn-controller" containerID="cri-o://5fc370078d7251e4b7b4f881fd5ada8d75b9ceb62d1f69a492b485a6b2c2ef74" gracePeriod=30 Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.089804 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" podUID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" containerName="northd" containerID="cri-o://2bbcfc4fe67fd6964c0a676c31d53d6a41ce547a0cea1860f431a866d57f2fbc" gracePeriod=30 Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.089847 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" podUID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" containerName="kube-rbac-proxy-node" containerID="cri-o://4cd689a184396b05630e15f65947e3038ab9c331ef0d3475ec9efea6e4af0c4e" gracePeriod=30 Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.089921 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" podUID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" containerName="nbdb" containerID="cri-o://97c631ada2ff6f7358da07a083e927818a0f340b60e2433ab787876433daf53f" gracePeriod=30 Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.089871 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" 
podUID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" containerName="ovn-acl-logging" containerID="cri-o://9822e6e2fc04b0bc932b908f806095755edfb24f3e4f2fae7b5f7c1911ec8e49" gracePeriod=30 Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.089957 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" podUID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://b76c6682140eef08a669526caeec358c600392a2765da92707cc82c75de617c9" gracePeriod=30 Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.089976 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" podUID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" containerName="sbdb" containerID="cri-o://646286a53000eee78bc55c5568670e1f7cd99166697afef7e67a9045c29696ab" gracePeriod=30 Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.118604 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" podUID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" containerName="ovnkube-controller" containerID="cri-o://4e8b9346b62936928cc5545618165392c34310abf146dfe5ecbd67ae78585b57" gracePeriod=30 Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.120524 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-8hbvk" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.127353 5113 generic.go:358] "Generic (PLEG): container finished" podID="27afa170-d0be-48dd-a0d6-02a747bb8e63" containerID="3a619797c7b7d5cd486395c294ff9ec7d1ea07e371c4f9dd57c8a01d3c267715" exitCode=0 Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.127374 5113 generic.go:358] "Generic (PLEG): container finished" podID="27afa170-d0be-48dd-a0d6-02a747bb8e63" containerID="be61f8bf6692f95398878d7cf592ca5e54db57c2c507cb7c3b04068563b154e0" exitCode=0 Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.127546 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-8hbvk" event={"ID":"27afa170-d0be-48dd-a0d6-02a747bb8e63","Type":"ContainerDied","Data":"3a619797c7b7d5cd486395c294ff9ec7d1ea07e371c4f9dd57c8a01d3c267715"} Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.127574 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-8hbvk" event={"ID":"27afa170-d0be-48dd-a0d6-02a747bb8e63","Type":"ContainerDied","Data":"be61f8bf6692f95398878d7cf592ca5e54db57c2c507cb7c3b04068563b154e0"} Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.127586 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-8hbvk" event={"ID":"27afa170-d0be-48dd-a0d6-02a747bb8e63","Type":"ContainerDied","Data":"9b9f907156f8c01822a6be54ea96a5b1d6ed2f59ffc8c6150e8019af22a08ac6"} Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.127603 5113 scope.go:117] "RemoveContainer" containerID="3a619797c7b7d5cd486395c294ff9ec7d1ea07e371c4f9dd57c8a01d3c267715" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.127770 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-8hbvk" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.151133 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lsgzv"] Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.151631 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8510340e-32f0-4f11-82c2-d57eed3356be" containerName="oc" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.151651 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="8510340e-32f0-4f11-82c2-d57eed3356be" containerName="oc" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.151670 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="27afa170-d0be-48dd-a0d6-02a747bb8e63" containerName="ovnkube-cluster-manager" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.151677 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="27afa170-d0be-48dd-a0d6-02a747bb8e63" containerName="ovnkube-cluster-manager" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.151699 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="27afa170-d0be-48dd-a0d6-02a747bb8e63" containerName="kube-rbac-proxy" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.151707 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="27afa170-d0be-48dd-a0d6-02a747bb8e63" containerName="kube-rbac-proxy" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.151797 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="27afa170-d0be-48dd-a0d6-02a747bb8e63" containerName="ovnkube-cluster-manager" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.151808 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="27afa170-d0be-48dd-a0d6-02a747bb8e63" containerName="kube-rbac-proxy" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.151818 5113 
memory_manager.go:356] "RemoveStaleState removing state" podUID="8510340e-32f0-4f11-82c2-d57eed3356be" containerName="oc" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.153018 5113 scope.go:117] "RemoveContainer" containerID="be61f8bf6692f95398878d7cf592ca5e54db57c2c507cb7c3b04068563b154e0" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.158420 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lsgzv" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.170283 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwzh4\" (UniqueName: \"kubernetes.io/projected/27afa170-d0be-48dd-a0d6-02a747bb8e63-kube-api-access-gwzh4\") pod \"27afa170-d0be-48dd-a0d6-02a747bb8e63\" (UID: \"27afa170-d0be-48dd-a0d6-02a747bb8e63\") " Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.170326 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/27afa170-d0be-48dd-a0d6-02a747bb8e63-env-overrides\") pod \"27afa170-d0be-48dd-a0d6-02a747bb8e63\" (UID: \"27afa170-d0be-48dd-a0d6-02a747bb8e63\") " Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.170425 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/27afa170-d0be-48dd-a0d6-02a747bb8e63-ovnkube-config\") pod \"27afa170-d0be-48dd-a0d6-02a747bb8e63\" (UID: \"27afa170-d0be-48dd-a0d6-02a747bb8e63\") " Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.170500 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/27afa170-d0be-48dd-a0d6-02a747bb8e63-ovn-control-plane-metrics-cert\") pod \"27afa170-d0be-48dd-a0d6-02a747bb8e63\" (UID: \"27afa170-d0be-48dd-a0d6-02a747bb8e63\") " Jan 21 09:28:54 crc 
kubenswrapper[5113]: I0121 09:28:54.171041 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27afa170-d0be-48dd-a0d6-02a747bb8e63-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "27afa170-d0be-48dd-a0d6-02a747bb8e63" (UID: "27afa170-d0be-48dd-a0d6-02a747bb8e63"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.171414 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27afa170-d0be-48dd-a0d6-02a747bb8e63-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "27afa170-d0be-48dd-a0d6-02a747bb8e63" (UID: "27afa170-d0be-48dd-a0d6-02a747bb8e63"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.178334 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27afa170-d0be-48dd-a0d6-02a747bb8e63-kube-api-access-gwzh4" (OuterVolumeSpecName: "kube-api-access-gwzh4") pod "27afa170-d0be-48dd-a0d6-02a747bb8e63" (UID: "27afa170-d0be-48dd-a0d6-02a747bb8e63"). InnerVolumeSpecName "kube-api-access-gwzh4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.179175 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27afa170-d0be-48dd-a0d6-02a747bb8e63-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "27afa170-d0be-48dd-a0d6-02a747bb8e63" (UID: "27afa170-d0be-48dd-a0d6-02a747bb8e63"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.241958 5113 scope.go:117] "RemoveContainer" containerID="3a619797c7b7d5cd486395c294ff9ec7d1ea07e371c4f9dd57c8a01d3c267715" Jan 21 09:28:54 crc kubenswrapper[5113]: E0121 09:28:54.242823 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a619797c7b7d5cd486395c294ff9ec7d1ea07e371c4f9dd57c8a01d3c267715\": container with ID starting with 3a619797c7b7d5cd486395c294ff9ec7d1ea07e371c4f9dd57c8a01d3c267715 not found: ID does not exist" containerID="3a619797c7b7d5cd486395c294ff9ec7d1ea07e371c4f9dd57c8a01d3c267715" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.242872 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a619797c7b7d5cd486395c294ff9ec7d1ea07e371c4f9dd57c8a01d3c267715"} err="failed to get container status \"3a619797c7b7d5cd486395c294ff9ec7d1ea07e371c4f9dd57c8a01d3c267715\": rpc error: code = NotFound desc = could not find container \"3a619797c7b7d5cd486395c294ff9ec7d1ea07e371c4f9dd57c8a01d3c267715\": container with ID starting with 3a619797c7b7d5cd486395c294ff9ec7d1ea07e371c4f9dd57c8a01d3c267715 not found: ID does not exist" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.242899 5113 scope.go:117] "RemoveContainer" containerID="be61f8bf6692f95398878d7cf592ca5e54db57c2c507cb7c3b04068563b154e0" Jan 21 09:28:54 crc kubenswrapper[5113]: E0121 09:28:54.243335 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be61f8bf6692f95398878d7cf592ca5e54db57c2c507cb7c3b04068563b154e0\": container with ID starting with be61f8bf6692f95398878d7cf592ca5e54db57c2c507cb7c3b04068563b154e0 not found: ID does not exist" containerID="be61f8bf6692f95398878d7cf592ca5e54db57c2c507cb7c3b04068563b154e0" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.243375 
5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be61f8bf6692f95398878d7cf592ca5e54db57c2c507cb7c3b04068563b154e0"} err="failed to get container status \"be61f8bf6692f95398878d7cf592ca5e54db57c2c507cb7c3b04068563b154e0\": rpc error: code = NotFound desc = could not find container \"be61f8bf6692f95398878d7cf592ca5e54db57c2c507cb7c3b04068563b154e0\": container with ID starting with be61f8bf6692f95398878d7cf592ca5e54db57c2c507cb7c3b04068563b154e0 not found: ID does not exist" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.243399 5113 scope.go:117] "RemoveContainer" containerID="3a619797c7b7d5cd486395c294ff9ec7d1ea07e371c4f9dd57c8a01d3c267715" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.243784 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a619797c7b7d5cd486395c294ff9ec7d1ea07e371c4f9dd57c8a01d3c267715"} err="failed to get container status \"3a619797c7b7d5cd486395c294ff9ec7d1ea07e371c4f9dd57c8a01d3c267715\": rpc error: code = NotFound desc = could not find container \"3a619797c7b7d5cd486395c294ff9ec7d1ea07e371c4f9dd57c8a01d3c267715\": container with ID starting with 3a619797c7b7d5cd486395c294ff9ec7d1ea07e371c4f9dd57c8a01d3c267715 not found: ID does not exist" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.243809 5113 scope.go:117] "RemoveContainer" containerID="be61f8bf6692f95398878d7cf592ca5e54db57c2c507cb7c3b04068563b154e0" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.244007 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be61f8bf6692f95398878d7cf592ca5e54db57c2c507cb7c3b04068563b154e0"} err="failed to get container status \"be61f8bf6692f95398878d7cf592ca5e54db57c2c507cb7c3b04068563b154e0\": rpc error: code = NotFound desc = could not find container \"be61f8bf6692f95398878d7cf592ca5e54db57c2c507cb7c3b04068563b154e0\": container with ID starting with 
be61f8bf6692f95398878d7cf592ca5e54db57c2c507cb7c3b04068563b154e0 not found: ID does not exist" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.272144 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/eb9d2a8c-4c07-4932-a692-c692bd9a74bb-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-lsgzv\" (UID: \"eb9d2a8c-4c07-4932-a692-c692bd9a74bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lsgzv" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.272193 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/eb9d2a8c-4c07-4932-a692-c692bd9a74bb-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-lsgzv\" (UID: \"eb9d2a8c-4c07-4932-a692-c692bd9a74bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lsgzv" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.272238 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbwnx\" (UniqueName: \"kubernetes.io/projected/eb9d2a8c-4c07-4932-a692-c692bd9a74bb-kube-api-access-dbwnx\") pod \"ovnkube-control-plane-97c9b6c48-lsgzv\" (UID: \"eb9d2a8c-4c07-4932-a692-c692bd9a74bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lsgzv" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.272252 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eb9d2a8c-4c07-4932-a692-c692bd9a74bb-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-lsgzv\" (UID: \"eb9d2a8c-4c07-4932-a692-c692bd9a74bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lsgzv" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.272304 5113 reconciler_common.go:299] 
"Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/27afa170-d0be-48dd-a0d6-02a747bb8e63-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.272314 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gwzh4\" (UniqueName: \"kubernetes.io/projected/27afa170-d0be-48dd-a0d6-02a747bb8e63-kube-api-access-gwzh4\") on node \"crc\" DevicePath \"\"" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.272324 5113 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/27afa170-d0be-48dd-a0d6-02a747bb8e63-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.272332 5113 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/27afa170-d0be-48dd-a0d6-02a747bb8e63-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.373463 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/eb9d2a8c-4c07-4932-a692-c692bd9a74bb-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-lsgzv\" (UID: \"eb9d2a8c-4c07-4932-a692-c692bd9a74bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lsgzv" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.373527 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/eb9d2a8c-4c07-4932-a692-c692bd9a74bb-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-lsgzv\" (UID: \"eb9d2a8c-4c07-4932-a692-c692bd9a74bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lsgzv" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.373592 5113 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dbwnx\" (UniqueName: \"kubernetes.io/projected/eb9d2a8c-4c07-4932-a692-c692bd9a74bb-kube-api-access-dbwnx\") pod \"ovnkube-control-plane-97c9b6c48-lsgzv\" (UID: \"eb9d2a8c-4c07-4932-a692-c692bd9a74bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lsgzv" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.373614 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eb9d2a8c-4c07-4932-a692-c692bd9a74bb-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-lsgzv\" (UID: \"eb9d2a8c-4c07-4932-a692-c692bd9a74bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lsgzv" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.374304 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eb9d2a8c-4c07-4932-a692-c692bd9a74bb-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-lsgzv\" (UID: \"eb9d2a8c-4c07-4932-a692-c692bd9a74bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lsgzv" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.374313 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/eb9d2a8c-4c07-4932-a692-c692bd9a74bb-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-lsgzv\" (UID: \"eb9d2a8c-4c07-4932-a692-c692bd9a74bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lsgzv" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.378211 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/eb9d2a8c-4c07-4932-a692-c692bd9a74bb-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-lsgzv\" (UID: \"eb9d2a8c-4c07-4932-a692-c692bd9a74bb\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lsgzv" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.397537 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbwnx\" (UniqueName: \"kubernetes.io/projected/eb9d2a8c-4c07-4932-a692-c692bd9a74bb-kube-api-access-dbwnx\") pod \"ovnkube-control-plane-97c9b6c48-lsgzv\" (UID: \"eb9d2a8c-4c07-4932-a692-c692bd9a74bb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lsgzv" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.400306 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qgkx4_4af3bb76-a840-45dd-941d-0b6ef5883ed8/ovn-acl-logging/0.log" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.400998 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qgkx4_4af3bb76-a840-45dd-941d-0b6ef5883ed8/ovn-controller/0.log" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.401571 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.455045 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-r7mlm"] Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.455919 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.456071 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.456191 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" containerName="nbdb" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.456293 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" containerName="nbdb" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.456395 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" containerName="kube-rbac-proxy-node" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.456484 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" containerName="kube-rbac-proxy-node" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.456587 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" containerName="northd" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.456665 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" containerName="northd" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.456807 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: 
removing container" podUID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" containerName="ovn-controller" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.456890 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" containerName="ovn-controller" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.456976 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" containerName="sbdb" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.457042 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" containerName="sbdb" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.457147 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" containerName="ovn-acl-logging" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.457225 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" containerName="ovn-acl-logging" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.457320 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" containerName="ovnkube-controller" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.457414 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" containerName="ovnkube-controller" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.457516 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" containerName="kubecfg-setup" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.457622 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" containerName="kubecfg-setup" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.457888 5113 
memory_manager.go:356] "RemoveStaleState removing state" podUID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" containerName="nbdb" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.458000 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" containerName="ovn-acl-logging" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.458078 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" containerName="kube-rbac-proxy-node" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.458155 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.458229 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" containerName="northd" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.458307 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" containerName="ovn-controller" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.458383 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" containerName="sbdb" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.458456 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" containerName="ovnkube-controller" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.465617 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-8hbvk"] Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.465831 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.473515 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-8hbvk"] Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.474102 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-etc-openvswitch\") pod \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.474159 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-659sf\" (UniqueName: \"kubernetes.io/projected/4af3bb76-a840-45dd-941d-0b6ef5883ed8-kube-api-access-659sf\") pod \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.474188 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-host-run-netns\") pod \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.474226 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-log-socket\") pod \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.474249 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "4af3bb76-a840-45dd-941d-0b6ef5883ed8" (UID: 
"4af3bb76-a840-45dd-941d-0b6ef5883ed8"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.474277 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4af3bb76-a840-45dd-941d-0b6ef5883ed8-ovnkube-config\") pod \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.474308 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "4af3bb76-a840-45dd-941d-0b6ef5883ed8" (UID: "4af3bb76-a840-45dd-941d-0b6ef5883ed8"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.474317 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-var-lib-openvswitch\") pod \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.474391 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4af3bb76-a840-45dd-941d-0b6ef5883ed8-ovn-node-metrics-cert\") pod \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.474417 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-host-cni-bin\") pod \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\" (UID: 
\"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.474441 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-host-slash\") pod \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.474470 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4af3bb76-a840-45dd-941d-0b6ef5883ed8-ovnkube-script-lib\") pod \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.474495 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-run-openvswitch\") pod \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.474517 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4af3bb76-a840-45dd-941d-0b6ef5883ed8-env-overrides\") pod \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.474546 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-host-kubelet\") pod \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.474569 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-host-run-ovn-kubernetes\") pod \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.474593 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-host-cni-netd\") pod \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.474625 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-run-ovn\") pod \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.474660 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.474680 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-run-systemd\") pod \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.474704 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-systemd-units\") pod \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " Jan 21 09:28:54 
crc kubenswrapper[5113]: I0121 09:28:54.474726 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-node-log\") pod \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\" (UID: \"4af3bb76-a840-45dd-941d-0b6ef5883ed8\") " Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.474824 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "4af3bb76-a840-45dd-941d-0b6ef5883ed8" (UID: "4af3bb76-a840-45dd-941d-0b6ef5883ed8"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.474854 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "4af3bb76-a840-45dd-941d-0b6ef5883ed8" (UID: "4af3bb76-a840-45dd-941d-0b6ef5883ed8"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.474873 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-host-slash" (OuterVolumeSpecName: "host-slash") pod "4af3bb76-a840-45dd-941d-0b6ef5883ed8" (UID: "4af3bb76-a840-45dd-941d-0b6ef5883ed8"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.474957 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "4af3bb76-a840-45dd-941d-0b6ef5883ed8" (UID: "4af3bb76-a840-45dd-941d-0b6ef5883ed8"). 
InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.475068 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-log-socket" (OuterVolumeSpecName: "log-socket") pod "4af3bb76-a840-45dd-941d-0b6ef5883ed8" (UID: "4af3bb76-a840-45dd-941d-0b6ef5883ed8"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.475147 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "4af3bb76-a840-45dd-941d-0b6ef5883ed8" (UID: "4af3bb76-a840-45dd-941d-0b6ef5883ed8"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.475208 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "4af3bb76-a840-45dd-941d-0b6ef5883ed8" (UID: "4af3bb76-a840-45dd-941d-0b6ef5883ed8"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.475267 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "4af3bb76-a840-45dd-941d-0b6ef5883ed8" (UID: "4af3bb76-a840-45dd-941d-0b6ef5883ed8"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.475316 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-node-log" (OuterVolumeSpecName: "node-log") pod "4af3bb76-a840-45dd-941d-0b6ef5883ed8" (UID: "4af3bb76-a840-45dd-941d-0b6ef5883ed8"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.475363 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "4af3bb76-a840-45dd-941d-0b6ef5883ed8" (UID: "4af3bb76-a840-45dd-941d-0b6ef5883ed8"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.475402 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "4af3bb76-a840-45dd-941d-0b6ef5883ed8" (UID: "4af3bb76-a840-45dd-941d-0b6ef5883ed8"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.475375 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4af3bb76-a840-45dd-941d-0b6ef5883ed8-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "4af3bb76-a840-45dd-941d-0b6ef5883ed8" (UID: "4af3bb76-a840-45dd-941d-0b6ef5883ed8"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.475432 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4af3bb76-a840-45dd-941d-0b6ef5883ed8-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "4af3bb76-a840-45dd-941d-0b6ef5883ed8" (UID: "4af3bb76-a840-45dd-941d-0b6ef5883ed8"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.475446 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "4af3bb76-a840-45dd-941d-0b6ef5883ed8" (UID: "4af3bb76-a840-45dd-941d-0b6ef5883ed8"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.475662 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4af3bb76-a840-45dd-941d-0b6ef5883ed8-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "4af3bb76-a840-45dd-941d-0b6ef5883ed8" (UID: "4af3bb76-a840-45dd-941d-0b6ef5883ed8"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.475693 5113 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-host-slash\") on node \"crc\" DevicePath \"\"" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.475768 5113 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4af3bb76-a840-45dd-941d-0b6ef5883ed8-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.475799 5113 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.475823 5113 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4af3bb76-a840-45dd-941d-0b6ef5883ed8-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.475846 5113 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.475869 5113 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.475893 5113 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 
09:28:54.475914 5113 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.475931 5113 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.475949 5113 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.475967 5113 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-node-log\") on node \"crc\" DevicePath \"\"" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.475983 5113 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.475999 5113 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.476016 5113 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-log-socket\") on node \"crc\" DevicePath \"\"" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.476031 5113 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.476047 5113 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.479489 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4af3bb76-a840-45dd-941d-0b6ef5883ed8-kube-api-access-659sf" (OuterVolumeSpecName: "kube-api-access-659sf") pod "4af3bb76-a840-45dd-941d-0b6ef5883ed8" (UID: "4af3bb76-a840-45dd-941d-0b6ef5883ed8"). InnerVolumeSpecName "kube-api-access-659sf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.480327 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4af3bb76-a840-45dd-941d-0b6ef5883ed8-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "4af3bb76-a840-45dd-941d-0b6ef5883ed8" (UID: "4af3bb76-a840-45dd-941d-0b6ef5883ed8"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.495142 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "4af3bb76-a840-45dd-941d-0b6ef5883ed8" (UID: "4af3bb76-a840-45dd-941d-0b6ef5883ed8"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.543278 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lsgzv" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.577570 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ab23e67-e0ab-4205-80ac-1b8900e15990-ovnkube-script-lib\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.577609 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ab23e67-e0ab-4205-80ac-1b8900e15990-ovn-node-metrics-cert\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.577634 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-run-ovn\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.577697 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-systemd-units\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.577721 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-host-cni-netd\") pod 
\"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.577764 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-host-cni-bin\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.577787 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-run-systemd\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.577812 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-run-openvswitch\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.577832 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ab23e67-e0ab-4205-80ac-1b8900e15990-ovnkube-config\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.577852 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-log-socket\") pod 
\"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.577880 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xws76\" (UniqueName: \"kubernetes.io/projected/6ab23e67-e0ab-4205-80ac-1b8900e15990-kube-api-access-xws76\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.577901 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-node-log\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.577916 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-host-slash\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.577935 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-host-run-netns\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.577950 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-var-lib-openvswitch\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.577973 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-host-kubelet\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.577998 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-host-run-ovn-kubernetes\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.578015 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-etc-openvswitch\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.578034 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.578052 5113 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ab23e67-e0ab-4205-80ac-1b8900e15990-env-overrides\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.578086 5113 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4af3bb76-a840-45dd-941d-0b6ef5883ed8-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.578096 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-659sf\" (UniqueName: \"kubernetes.io/projected/4af3bb76-a840-45dd-941d-0b6ef5883ed8-kube-api-access-659sf\") on node \"crc\" DevicePath \"\"" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.578105 5113 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4af3bb76-a840-45dd-941d-0b6ef5883ed8-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.578113 5113 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4af3bb76-a840-45dd-941d-0b6ef5883ed8-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.679552 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xws76\" (UniqueName: \"kubernetes.io/projected/6ab23e67-e0ab-4205-80ac-1b8900e15990-kube-api-access-xws76\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.679647 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-node-log\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.679673 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-host-slash\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.679698 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-host-run-netns\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.679720 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-var-lib-openvswitch\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.679776 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-node-log\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.679787 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-host-kubelet\") pod \"ovnkube-node-r7mlm\" 
(UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.679817 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-host-kubelet\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.679839 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-host-run-netns\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.679867 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-host-run-ovn-kubernetes\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.679853 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-host-slash\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.679853 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-host-run-ovn-kubernetes\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.679920 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-var-lib-openvswitch\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.679963 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-etc-openvswitch\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.679939 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-etc-openvswitch\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.680044 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.680079 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ab23e67-e0ab-4205-80ac-1b8900e15990-env-overrides\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.680111 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ab23e67-e0ab-4205-80ac-1b8900e15990-ovnkube-script-lib\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.680152 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ab23e67-e0ab-4205-80ac-1b8900e15990-ovn-node-metrics-cert\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.680145 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.680783 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-run-ovn\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.680886 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-systemd-units\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.680917 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-host-cni-netd\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.680945 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-host-cni-bin\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.680966 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-run-systemd\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.681000 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-run-openvswitch\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.681051 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-run-openvswitch\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: 
I0121 09:28:54.681208 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ab23e67-e0ab-4205-80ac-1b8900e15990-ovnkube-config\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.681250 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-log-socket\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.681555 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ab23e67-e0ab-4205-80ac-1b8900e15990-env-overrides\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.681640 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-host-cni-netd\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.681695 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-run-ovn\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.681713 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-log-socket\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.681761 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-host-cni-bin\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.681784 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-systemd-units\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.681794 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6ab23e67-e0ab-4205-80ac-1b8900e15990-run-systemd\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.681943 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ab23e67-e0ab-4205-80ac-1b8900e15990-ovnkube-script-lib\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.682338 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ab23e67-e0ab-4205-80ac-1b8900e15990-ovnkube-config\") pod \"ovnkube-node-r7mlm\" (UID: 
\"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.695759 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ab23e67-e0ab-4205-80ac-1b8900e15990-ovn-node-metrics-cert\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.701699 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xws76\" (UniqueName: \"kubernetes.io/projected/6ab23e67-e0ab-4205-80ac-1b8900e15990-kube-api-access-xws76\") pod \"ovnkube-node-r7mlm\" (UID: \"6ab23e67-e0ab-4205-80ac-1b8900e15990\") " pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.780459 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:28:54 crc kubenswrapper[5113]: W0121 09:28:54.799135 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ab23e67_e0ab_4205_80ac_1b8900e15990.slice/crio-c165a8de8867a9bc8633e64575ede660474e59d09b014231bdffd05226d523b3 WatchSource:0}: Error finding container c165a8de8867a9bc8633e64575ede660474e59d09b014231bdffd05226d523b3: Status 404 returned error can't find the container with id c165a8de8867a9bc8633e64575ede660474e59d09b014231bdffd05226d523b3 Jan 21 09:28:54 crc kubenswrapper[5113]: I0121 09:28:54.855907 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27afa170-d0be-48dd-a0d6-02a747bb8e63" path="/var/lib/kubelet/pods/27afa170-d0be-48dd-a0d6-02a747bb8e63/volumes" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.139259 5113 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qgkx4_4af3bb76-a840-45dd-941d-0b6ef5883ed8/ovn-acl-logging/0.log" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.139884 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qgkx4_4af3bb76-a840-45dd-941d-0b6ef5883ed8/ovn-controller/0.log" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140260 5113 generic.go:358] "Generic (PLEG): container finished" podID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" containerID="4e8b9346b62936928cc5545618165392c34310abf146dfe5ecbd67ae78585b57" exitCode=0 Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140290 5113 generic.go:358] "Generic (PLEG): container finished" podID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" containerID="646286a53000eee78bc55c5568670e1f7cd99166697afef7e67a9045c29696ab" exitCode=0 Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140299 5113 generic.go:358] "Generic (PLEG): container finished" podID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" containerID="97c631ada2ff6f7358da07a083e927818a0f340b60e2433ab787876433daf53f" exitCode=0 Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140312 5113 generic.go:358] "Generic (PLEG): container finished" podID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" containerID="2bbcfc4fe67fd6964c0a676c31d53d6a41ce547a0cea1860f431a866d57f2fbc" exitCode=0 Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140320 5113 generic.go:358] "Generic (PLEG): container finished" podID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" containerID="b76c6682140eef08a669526caeec358c600392a2765da92707cc82c75de617c9" exitCode=0 Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140328 5113 generic.go:358] "Generic (PLEG): container finished" podID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" containerID="4cd689a184396b05630e15f65947e3038ab9c331ef0d3475ec9efea6e4af0c4e" exitCode=0 Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140338 5113 generic.go:358] "Generic (PLEG): 
container finished" podID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" containerID="9822e6e2fc04b0bc932b908f806095755edfb24f3e4f2fae7b5f7c1911ec8e49" exitCode=143 Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140351 5113 generic.go:358] "Generic (PLEG): container finished" podID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" containerID="5fc370078d7251e4b7b4f881fd5ada8d75b9ceb62d1f69a492b485a6b2c2ef74" exitCode=143 Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140393 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" event={"ID":"4af3bb76-a840-45dd-941d-0b6ef5883ed8","Type":"ContainerDied","Data":"4e8b9346b62936928cc5545618165392c34310abf146dfe5ecbd67ae78585b57"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140432 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" event={"ID":"4af3bb76-a840-45dd-941d-0b6ef5883ed8","Type":"ContainerDied","Data":"646286a53000eee78bc55c5568670e1f7cd99166697afef7e67a9045c29696ab"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140447 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" event={"ID":"4af3bb76-a840-45dd-941d-0b6ef5883ed8","Type":"ContainerDied","Data":"97c631ada2ff6f7358da07a083e927818a0f340b60e2433ab787876433daf53f"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140453 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140469 5113 scope.go:117] "RemoveContainer" containerID="4e8b9346b62936928cc5545618165392c34310abf146dfe5ecbd67ae78585b57" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140457 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" event={"ID":"4af3bb76-a840-45dd-941d-0b6ef5883ed8","Type":"ContainerDied","Data":"2bbcfc4fe67fd6964c0a676c31d53d6a41ce547a0cea1860f431a866d57f2fbc"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140619 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" event={"ID":"4af3bb76-a840-45dd-941d-0b6ef5883ed8","Type":"ContainerDied","Data":"b76c6682140eef08a669526caeec358c600392a2765da92707cc82c75de617c9"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140644 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" event={"ID":"4af3bb76-a840-45dd-941d-0b6ef5883ed8","Type":"ContainerDied","Data":"4cd689a184396b05630e15f65947e3038ab9c331ef0d3475ec9efea6e4af0c4e"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140665 5113 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9822e6e2fc04b0bc932b908f806095755edfb24f3e4f2fae7b5f7c1911ec8e49"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140680 5113 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5fc370078d7251e4b7b4f881fd5ada8d75b9ceb62d1f69a492b485a6b2c2ef74"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140690 5113 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34e19902f610c0a75a9fbc00cb8c7cd18dbe5ea294e8217d48b706643672ee08"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 
09:28:55.140703 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" event={"ID":"4af3bb76-a840-45dd-941d-0b6ef5883ed8","Type":"ContainerDied","Data":"9822e6e2fc04b0bc932b908f806095755edfb24f3e4f2fae7b5f7c1911ec8e49"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140718 5113 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4e8b9346b62936928cc5545618165392c34310abf146dfe5ecbd67ae78585b57"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140729 5113 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"646286a53000eee78bc55c5568670e1f7cd99166697afef7e67a9045c29696ab"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140760 5113 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"97c631ada2ff6f7358da07a083e927818a0f340b60e2433ab787876433daf53f"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140769 5113 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2bbcfc4fe67fd6964c0a676c31d53d6a41ce547a0cea1860f431a866d57f2fbc"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140778 5113 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b76c6682140eef08a669526caeec358c600392a2765da92707cc82c75de617c9"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140787 5113 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4cd689a184396b05630e15f65947e3038ab9c331ef0d3475ec9efea6e4af0c4e"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140798 5113 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"9822e6e2fc04b0bc932b908f806095755edfb24f3e4f2fae7b5f7c1911ec8e49"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140807 5113 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5fc370078d7251e4b7b4f881fd5ada8d75b9ceb62d1f69a492b485a6b2c2ef74"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140827 5113 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34e19902f610c0a75a9fbc00cb8c7cd18dbe5ea294e8217d48b706643672ee08"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140840 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" event={"ID":"4af3bb76-a840-45dd-941d-0b6ef5883ed8","Type":"ContainerDied","Data":"5fc370078d7251e4b7b4f881fd5ada8d75b9ceb62d1f69a492b485a6b2c2ef74"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140854 5113 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4e8b9346b62936928cc5545618165392c34310abf146dfe5ecbd67ae78585b57"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140866 5113 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"646286a53000eee78bc55c5568670e1f7cd99166697afef7e67a9045c29696ab"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140875 5113 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"97c631ada2ff6f7358da07a083e927818a0f340b60e2433ab787876433daf53f"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140882 5113 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2bbcfc4fe67fd6964c0a676c31d53d6a41ce547a0cea1860f431a866d57f2fbc"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140890 5113 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b76c6682140eef08a669526caeec358c600392a2765da92707cc82c75de617c9"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140897 5113 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4cd689a184396b05630e15f65947e3038ab9c331ef0d3475ec9efea6e4af0c4e"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140903 5113 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9822e6e2fc04b0bc932b908f806095755edfb24f3e4f2fae7b5f7c1911ec8e49"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140909 5113 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5fc370078d7251e4b7b4f881fd5ada8d75b9ceb62d1f69a492b485a6b2c2ef74"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140916 5113 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34e19902f610c0a75a9fbc00cb8c7cd18dbe5ea294e8217d48b706643672ee08"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140925 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qgkx4" event={"ID":"4af3bb76-a840-45dd-941d-0b6ef5883ed8","Type":"ContainerDied","Data":"a45e2e445afa3f2e6aa1828d0a90935a0ac6f11661a8aae3131f0361b3925386"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140937 5113 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4e8b9346b62936928cc5545618165392c34310abf146dfe5ecbd67ae78585b57"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140949 5113 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"646286a53000eee78bc55c5568670e1f7cd99166697afef7e67a9045c29696ab"} Jan 21 
09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140956 5113 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"97c631ada2ff6f7358da07a083e927818a0f340b60e2433ab787876433daf53f"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140962 5113 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2bbcfc4fe67fd6964c0a676c31d53d6a41ce547a0cea1860f431a866d57f2fbc"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140969 5113 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b76c6682140eef08a669526caeec358c600392a2765da92707cc82c75de617c9"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140975 5113 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4cd689a184396b05630e15f65947e3038ab9c331ef0d3475ec9efea6e4af0c4e"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140981 5113 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9822e6e2fc04b0bc932b908f806095755edfb24f3e4f2fae7b5f7c1911ec8e49"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140987 5113 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5fc370078d7251e4b7b4f881fd5ada8d75b9ceb62d1f69a492b485a6b2c2ef74"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.140995 5113 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34e19902f610c0a75a9fbc00cb8c7cd18dbe5ea294e8217d48b706643672ee08"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.143336 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lsgzv" 
event={"ID":"eb9d2a8c-4c07-4932-a692-c692bd9a74bb","Type":"ContainerStarted","Data":"5d5d1d6cd154c3ef3993d9aa935dfe9920b3b51bdd7796b310328aac9a01c117"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.143386 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lsgzv" event={"ID":"eb9d2a8c-4c07-4932-a692-c692bd9a74bb","Type":"ContainerStarted","Data":"a2b6c1f9f3e74a9a0f17ce3373e3ffb473742072bb9de7bde30e70ba53ddf49d"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.143402 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lsgzv" event={"ID":"eb9d2a8c-4c07-4932-a692-c692bd9a74bb","Type":"ContainerStarted","Data":"338988aaf5518c848a90e92772bcc33893422e4717c61486d0a5079f29e0ac23"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.146621 5113 generic.go:358] "Generic (PLEG): container finished" podID="6ab23e67-e0ab-4205-80ac-1b8900e15990" containerID="92fdd4fbb024b0c866240fe3c592a52939824702bd34947d303fa220c001e976" exitCode=0 Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.146720 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" event={"ID":"6ab23e67-e0ab-4205-80ac-1b8900e15990","Type":"ContainerDied","Data":"92fdd4fbb024b0c866240fe3c592a52939824702bd34947d303fa220c001e976"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.146766 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" event={"ID":"6ab23e67-e0ab-4205-80ac-1b8900e15990","Type":"ContainerStarted","Data":"c165a8de8867a9bc8633e64575ede660474e59d09b014231bdffd05226d523b3"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.151308 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vcw7s_11da35cd-b282-4537-ac8f-b6c86b18c21f/kube-multus/0.log" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.151431 
5113 generic.go:358] "Generic (PLEG): container finished" podID="11da35cd-b282-4537-ac8f-b6c86b18c21f" containerID="ae662f5c068ffc7d4f5b76b096303acd87660f6089e6945d659a7a22cdde9e4e" exitCode=2 Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.151513 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vcw7s" event={"ID":"11da35cd-b282-4537-ac8f-b6c86b18c21f","Type":"ContainerDied","Data":"ae662f5c068ffc7d4f5b76b096303acd87660f6089e6945d659a7a22cdde9e4e"} Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.152096 5113 scope.go:117] "RemoveContainer" containerID="ae662f5c068ffc7d4f5b76b096303acd87660f6089e6945d659a7a22cdde9e4e" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.165044 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qgkx4"] Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.168031 5113 scope.go:117] "RemoveContainer" containerID="646286a53000eee78bc55c5568670e1f7cd99166697afef7e67a9045c29696ab" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.169989 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qgkx4"] Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.177184 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-lsgzv" podStartSLOduration=2.17716513 podStartE2EDuration="2.17716513s" podCreationTimestamp="2026-01-21 09:28:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:28:55.176194213 +0000 UTC m=+664.677021282" watchObservedRunningTime="2026-01-21 09:28:55.17716513 +0000 UTC m=+664.677992189" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.188160 5113 scope.go:117] "RemoveContainer" containerID="97c631ada2ff6f7358da07a083e927818a0f340b60e2433ab787876433daf53f" Jan 21 09:28:55 crc 
kubenswrapper[5113]: I0121 09:28:55.204894 5113 scope.go:117] "RemoveContainer" containerID="2bbcfc4fe67fd6964c0a676c31d53d6a41ce547a0cea1860f431a866d57f2fbc" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.253029 5113 scope.go:117] "RemoveContainer" containerID="b76c6682140eef08a669526caeec358c600392a2765da92707cc82c75de617c9" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.276604 5113 scope.go:117] "RemoveContainer" containerID="4cd689a184396b05630e15f65947e3038ab9c331ef0d3475ec9efea6e4af0c4e" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.293968 5113 scope.go:117] "RemoveContainer" containerID="9822e6e2fc04b0bc932b908f806095755edfb24f3e4f2fae7b5f7c1911ec8e49" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.310466 5113 scope.go:117] "RemoveContainer" containerID="5fc370078d7251e4b7b4f881fd5ada8d75b9ceb62d1f69a492b485a6b2c2ef74" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.326383 5113 scope.go:117] "RemoveContainer" containerID="34e19902f610c0a75a9fbc00cb8c7cd18dbe5ea294e8217d48b706643672ee08" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.341884 5113 scope.go:117] "RemoveContainer" containerID="4e8b9346b62936928cc5545618165392c34310abf146dfe5ecbd67ae78585b57" Jan 21 09:28:55 crc kubenswrapper[5113]: E0121 09:28:55.347191 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e8b9346b62936928cc5545618165392c34310abf146dfe5ecbd67ae78585b57\": container with ID starting with 4e8b9346b62936928cc5545618165392c34310abf146dfe5ecbd67ae78585b57 not found: ID does not exist" containerID="4e8b9346b62936928cc5545618165392c34310abf146dfe5ecbd67ae78585b57" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.347236 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e8b9346b62936928cc5545618165392c34310abf146dfe5ecbd67ae78585b57"} err="failed to get container status 
\"4e8b9346b62936928cc5545618165392c34310abf146dfe5ecbd67ae78585b57\": rpc error: code = NotFound desc = could not find container \"4e8b9346b62936928cc5545618165392c34310abf146dfe5ecbd67ae78585b57\": container with ID starting with 4e8b9346b62936928cc5545618165392c34310abf146dfe5ecbd67ae78585b57 not found: ID does not exist" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.347264 5113 scope.go:117] "RemoveContainer" containerID="646286a53000eee78bc55c5568670e1f7cd99166697afef7e67a9045c29696ab" Jan 21 09:28:55 crc kubenswrapper[5113]: E0121 09:28:55.347506 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"646286a53000eee78bc55c5568670e1f7cd99166697afef7e67a9045c29696ab\": container with ID starting with 646286a53000eee78bc55c5568670e1f7cd99166697afef7e67a9045c29696ab not found: ID does not exist" containerID="646286a53000eee78bc55c5568670e1f7cd99166697afef7e67a9045c29696ab" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.347529 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"646286a53000eee78bc55c5568670e1f7cd99166697afef7e67a9045c29696ab"} err="failed to get container status \"646286a53000eee78bc55c5568670e1f7cd99166697afef7e67a9045c29696ab\": rpc error: code = NotFound desc = could not find container \"646286a53000eee78bc55c5568670e1f7cd99166697afef7e67a9045c29696ab\": container with ID starting with 646286a53000eee78bc55c5568670e1f7cd99166697afef7e67a9045c29696ab not found: ID does not exist" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.347546 5113 scope.go:117] "RemoveContainer" containerID="97c631ada2ff6f7358da07a083e927818a0f340b60e2433ab787876433daf53f" Jan 21 09:28:55 crc kubenswrapper[5113]: E0121 09:28:55.348165 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"97c631ada2ff6f7358da07a083e927818a0f340b60e2433ab787876433daf53f\": container with ID starting with 97c631ada2ff6f7358da07a083e927818a0f340b60e2433ab787876433daf53f not found: ID does not exist" containerID="97c631ada2ff6f7358da07a083e927818a0f340b60e2433ab787876433daf53f" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.348197 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97c631ada2ff6f7358da07a083e927818a0f340b60e2433ab787876433daf53f"} err="failed to get container status \"97c631ada2ff6f7358da07a083e927818a0f340b60e2433ab787876433daf53f\": rpc error: code = NotFound desc = could not find container \"97c631ada2ff6f7358da07a083e927818a0f340b60e2433ab787876433daf53f\": container with ID starting with 97c631ada2ff6f7358da07a083e927818a0f340b60e2433ab787876433daf53f not found: ID does not exist" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.348220 5113 scope.go:117] "RemoveContainer" containerID="2bbcfc4fe67fd6964c0a676c31d53d6a41ce547a0cea1860f431a866d57f2fbc" Jan 21 09:28:55 crc kubenswrapper[5113]: E0121 09:28:55.349910 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bbcfc4fe67fd6964c0a676c31d53d6a41ce547a0cea1860f431a866d57f2fbc\": container with ID starting with 2bbcfc4fe67fd6964c0a676c31d53d6a41ce547a0cea1860f431a866d57f2fbc not found: ID does not exist" containerID="2bbcfc4fe67fd6964c0a676c31d53d6a41ce547a0cea1860f431a866d57f2fbc" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.349944 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bbcfc4fe67fd6964c0a676c31d53d6a41ce547a0cea1860f431a866d57f2fbc"} err="failed to get container status \"2bbcfc4fe67fd6964c0a676c31d53d6a41ce547a0cea1860f431a866d57f2fbc\": rpc error: code = NotFound desc = could not find container \"2bbcfc4fe67fd6964c0a676c31d53d6a41ce547a0cea1860f431a866d57f2fbc\": container with ID 
starting with 2bbcfc4fe67fd6964c0a676c31d53d6a41ce547a0cea1860f431a866d57f2fbc not found: ID does not exist" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.349967 5113 scope.go:117] "RemoveContainer" containerID="b76c6682140eef08a669526caeec358c600392a2765da92707cc82c75de617c9" Jan 21 09:28:55 crc kubenswrapper[5113]: E0121 09:28:55.350510 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b76c6682140eef08a669526caeec358c600392a2765da92707cc82c75de617c9\": container with ID starting with b76c6682140eef08a669526caeec358c600392a2765da92707cc82c75de617c9 not found: ID does not exist" containerID="b76c6682140eef08a669526caeec358c600392a2765da92707cc82c75de617c9" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.350561 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b76c6682140eef08a669526caeec358c600392a2765da92707cc82c75de617c9"} err="failed to get container status \"b76c6682140eef08a669526caeec358c600392a2765da92707cc82c75de617c9\": rpc error: code = NotFound desc = could not find container \"b76c6682140eef08a669526caeec358c600392a2765da92707cc82c75de617c9\": container with ID starting with b76c6682140eef08a669526caeec358c600392a2765da92707cc82c75de617c9 not found: ID does not exist" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.350580 5113 scope.go:117] "RemoveContainer" containerID="4cd689a184396b05630e15f65947e3038ab9c331ef0d3475ec9efea6e4af0c4e" Jan 21 09:28:55 crc kubenswrapper[5113]: E0121 09:28:55.350964 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cd689a184396b05630e15f65947e3038ab9c331ef0d3475ec9efea6e4af0c4e\": container with ID starting with 4cd689a184396b05630e15f65947e3038ab9c331ef0d3475ec9efea6e4af0c4e not found: ID does not exist" containerID="4cd689a184396b05630e15f65947e3038ab9c331ef0d3475ec9efea6e4af0c4e" Jan 21 
09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.350988 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cd689a184396b05630e15f65947e3038ab9c331ef0d3475ec9efea6e4af0c4e"} err="failed to get container status \"4cd689a184396b05630e15f65947e3038ab9c331ef0d3475ec9efea6e4af0c4e\": rpc error: code = NotFound desc = could not find container \"4cd689a184396b05630e15f65947e3038ab9c331ef0d3475ec9efea6e4af0c4e\": container with ID starting with 4cd689a184396b05630e15f65947e3038ab9c331ef0d3475ec9efea6e4af0c4e not found: ID does not exist" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.351002 5113 scope.go:117] "RemoveContainer" containerID="9822e6e2fc04b0bc932b908f806095755edfb24f3e4f2fae7b5f7c1911ec8e49" Jan 21 09:28:55 crc kubenswrapper[5113]: E0121 09:28:55.351200 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9822e6e2fc04b0bc932b908f806095755edfb24f3e4f2fae7b5f7c1911ec8e49\": container with ID starting with 9822e6e2fc04b0bc932b908f806095755edfb24f3e4f2fae7b5f7c1911ec8e49 not found: ID does not exist" containerID="9822e6e2fc04b0bc932b908f806095755edfb24f3e4f2fae7b5f7c1911ec8e49" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.351219 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9822e6e2fc04b0bc932b908f806095755edfb24f3e4f2fae7b5f7c1911ec8e49"} err="failed to get container status \"9822e6e2fc04b0bc932b908f806095755edfb24f3e4f2fae7b5f7c1911ec8e49\": rpc error: code = NotFound desc = could not find container \"9822e6e2fc04b0bc932b908f806095755edfb24f3e4f2fae7b5f7c1911ec8e49\": container with ID starting with 9822e6e2fc04b0bc932b908f806095755edfb24f3e4f2fae7b5f7c1911ec8e49 not found: ID does not exist" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.351231 5113 scope.go:117] "RemoveContainer" 
containerID="5fc370078d7251e4b7b4f881fd5ada8d75b9ceb62d1f69a492b485a6b2c2ef74" Jan 21 09:28:55 crc kubenswrapper[5113]: E0121 09:28:55.351446 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fc370078d7251e4b7b4f881fd5ada8d75b9ceb62d1f69a492b485a6b2c2ef74\": container with ID starting with 5fc370078d7251e4b7b4f881fd5ada8d75b9ceb62d1f69a492b485a6b2c2ef74 not found: ID does not exist" containerID="5fc370078d7251e4b7b4f881fd5ada8d75b9ceb62d1f69a492b485a6b2c2ef74" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.351467 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fc370078d7251e4b7b4f881fd5ada8d75b9ceb62d1f69a492b485a6b2c2ef74"} err="failed to get container status \"5fc370078d7251e4b7b4f881fd5ada8d75b9ceb62d1f69a492b485a6b2c2ef74\": rpc error: code = NotFound desc = could not find container \"5fc370078d7251e4b7b4f881fd5ada8d75b9ceb62d1f69a492b485a6b2c2ef74\": container with ID starting with 5fc370078d7251e4b7b4f881fd5ada8d75b9ceb62d1f69a492b485a6b2c2ef74 not found: ID does not exist" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.351480 5113 scope.go:117] "RemoveContainer" containerID="34e19902f610c0a75a9fbc00cb8c7cd18dbe5ea294e8217d48b706643672ee08" Jan 21 09:28:55 crc kubenswrapper[5113]: E0121 09:28:55.351770 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34e19902f610c0a75a9fbc00cb8c7cd18dbe5ea294e8217d48b706643672ee08\": container with ID starting with 34e19902f610c0a75a9fbc00cb8c7cd18dbe5ea294e8217d48b706643672ee08 not found: ID does not exist" containerID="34e19902f610c0a75a9fbc00cb8c7cd18dbe5ea294e8217d48b706643672ee08" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.351799 5113 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"34e19902f610c0a75a9fbc00cb8c7cd18dbe5ea294e8217d48b706643672ee08"} err="failed to get container status \"34e19902f610c0a75a9fbc00cb8c7cd18dbe5ea294e8217d48b706643672ee08\": rpc error: code = NotFound desc = could not find container \"34e19902f610c0a75a9fbc00cb8c7cd18dbe5ea294e8217d48b706643672ee08\": container with ID starting with 34e19902f610c0a75a9fbc00cb8c7cd18dbe5ea294e8217d48b706643672ee08 not found: ID does not exist" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.351836 5113 scope.go:117] "RemoveContainer" containerID="4e8b9346b62936928cc5545618165392c34310abf146dfe5ecbd67ae78585b57" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.352113 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e8b9346b62936928cc5545618165392c34310abf146dfe5ecbd67ae78585b57"} err="failed to get container status \"4e8b9346b62936928cc5545618165392c34310abf146dfe5ecbd67ae78585b57\": rpc error: code = NotFound desc = could not find container \"4e8b9346b62936928cc5545618165392c34310abf146dfe5ecbd67ae78585b57\": container with ID starting with 4e8b9346b62936928cc5545618165392c34310abf146dfe5ecbd67ae78585b57 not found: ID does not exist" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.352134 5113 scope.go:117] "RemoveContainer" containerID="646286a53000eee78bc55c5568670e1f7cd99166697afef7e67a9045c29696ab" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.352448 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"646286a53000eee78bc55c5568670e1f7cd99166697afef7e67a9045c29696ab"} err="failed to get container status \"646286a53000eee78bc55c5568670e1f7cd99166697afef7e67a9045c29696ab\": rpc error: code = NotFound desc = could not find container \"646286a53000eee78bc55c5568670e1f7cd99166697afef7e67a9045c29696ab\": container with ID starting with 646286a53000eee78bc55c5568670e1f7cd99166697afef7e67a9045c29696ab not found: ID does not 
exist" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.352464 5113 scope.go:117] "RemoveContainer" containerID="97c631ada2ff6f7358da07a083e927818a0f340b60e2433ab787876433daf53f" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.352684 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97c631ada2ff6f7358da07a083e927818a0f340b60e2433ab787876433daf53f"} err="failed to get container status \"97c631ada2ff6f7358da07a083e927818a0f340b60e2433ab787876433daf53f\": rpc error: code = NotFound desc = could not find container \"97c631ada2ff6f7358da07a083e927818a0f340b60e2433ab787876433daf53f\": container with ID starting with 97c631ada2ff6f7358da07a083e927818a0f340b60e2433ab787876433daf53f not found: ID does not exist" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.352725 5113 scope.go:117] "RemoveContainer" containerID="2bbcfc4fe67fd6964c0a676c31d53d6a41ce547a0cea1860f431a866d57f2fbc" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.352939 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bbcfc4fe67fd6964c0a676c31d53d6a41ce547a0cea1860f431a866d57f2fbc"} err="failed to get container status \"2bbcfc4fe67fd6964c0a676c31d53d6a41ce547a0cea1860f431a866d57f2fbc\": rpc error: code = NotFound desc = could not find container \"2bbcfc4fe67fd6964c0a676c31d53d6a41ce547a0cea1860f431a866d57f2fbc\": container with ID starting with 2bbcfc4fe67fd6964c0a676c31d53d6a41ce547a0cea1860f431a866d57f2fbc not found: ID does not exist" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.352956 5113 scope.go:117] "RemoveContainer" containerID="b76c6682140eef08a669526caeec358c600392a2765da92707cc82c75de617c9" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.353282 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b76c6682140eef08a669526caeec358c600392a2765da92707cc82c75de617c9"} err="failed to get container status 
\"b76c6682140eef08a669526caeec358c600392a2765da92707cc82c75de617c9\": rpc error: code = NotFound desc = could not find container \"b76c6682140eef08a669526caeec358c600392a2765da92707cc82c75de617c9\": container with ID starting with b76c6682140eef08a669526caeec358c600392a2765da92707cc82c75de617c9 not found: ID does not exist" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.353308 5113 scope.go:117] "RemoveContainer" containerID="4cd689a184396b05630e15f65947e3038ab9c331ef0d3475ec9efea6e4af0c4e" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.353485 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cd689a184396b05630e15f65947e3038ab9c331ef0d3475ec9efea6e4af0c4e"} err="failed to get container status \"4cd689a184396b05630e15f65947e3038ab9c331ef0d3475ec9efea6e4af0c4e\": rpc error: code = NotFound desc = could not find container \"4cd689a184396b05630e15f65947e3038ab9c331ef0d3475ec9efea6e4af0c4e\": container with ID starting with 4cd689a184396b05630e15f65947e3038ab9c331ef0d3475ec9efea6e4af0c4e not found: ID does not exist" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.353501 5113 scope.go:117] "RemoveContainer" containerID="9822e6e2fc04b0bc932b908f806095755edfb24f3e4f2fae7b5f7c1911ec8e49" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.354143 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9822e6e2fc04b0bc932b908f806095755edfb24f3e4f2fae7b5f7c1911ec8e49"} err="failed to get container status \"9822e6e2fc04b0bc932b908f806095755edfb24f3e4f2fae7b5f7c1911ec8e49\": rpc error: code = NotFound desc = could not find container \"9822e6e2fc04b0bc932b908f806095755edfb24f3e4f2fae7b5f7c1911ec8e49\": container with ID starting with 9822e6e2fc04b0bc932b908f806095755edfb24f3e4f2fae7b5f7c1911ec8e49 not found: ID does not exist" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.354167 5113 scope.go:117] "RemoveContainer" 
containerID="5fc370078d7251e4b7b4f881fd5ada8d75b9ceb62d1f69a492b485a6b2c2ef74" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.354373 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fc370078d7251e4b7b4f881fd5ada8d75b9ceb62d1f69a492b485a6b2c2ef74"} err="failed to get container status \"5fc370078d7251e4b7b4f881fd5ada8d75b9ceb62d1f69a492b485a6b2c2ef74\": rpc error: code = NotFound desc = could not find container \"5fc370078d7251e4b7b4f881fd5ada8d75b9ceb62d1f69a492b485a6b2c2ef74\": container with ID starting with 5fc370078d7251e4b7b4f881fd5ada8d75b9ceb62d1f69a492b485a6b2c2ef74 not found: ID does not exist" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.354391 5113 scope.go:117] "RemoveContainer" containerID="34e19902f610c0a75a9fbc00cb8c7cd18dbe5ea294e8217d48b706643672ee08" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.354625 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34e19902f610c0a75a9fbc00cb8c7cd18dbe5ea294e8217d48b706643672ee08"} err="failed to get container status \"34e19902f610c0a75a9fbc00cb8c7cd18dbe5ea294e8217d48b706643672ee08\": rpc error: code = NotFound desc = could not find container \"34e19902f610c0a75a9fbc00cb8c7cd18dbe5ea294e8217d48b706643672ee08\": container with ID starting with 34e19902f610c0a75a9fbc00cb8c7cd18dbe5ea294e8217d48b706643672ee08 not found: ID does not exist" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.354643 5113 scope.go:117] "RemoveContainer" containerID="4e8b9346b62936928cc5545618165392c34310abf146dfe5ecbd67ae78585b57" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.354860 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e8b9346b62936928cc5545618165392c34310abf146dfe5ecbd67ae78585b57"} err="failed to get container status \"4e8b9346b62936928cc5545618165392c34310abf146dfe5ecbd67ae78585b57\": rpc error: code = NotFound desc = could 
not find container \"4e8b9346b62936928cc5545618165392c34310abf146dfe5ecbd67ae78585b57\": container with ID starting with 4e8b9346b62936928cc5545618165392c34310abf146dfe5ecbd67ae78585b57 not found: ID does not exist" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.354879 5113 scope.go:117] "RemoveContainer" containerID="646286a53000eee78bc55c5568670e1f7cd99166697afef7e67a9045c29696ab" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.355094 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"646286a53000eee78bc55c5568670e1f7cd99166697afef7e67a9045c29696ab"} err="failed to get container status \"646286a53000eee78bc55c5568670e1f7cd99166697afef7e67a9045c29696ab\": rpc error: code = NotFound desc = could not find container \"646286a53000eee78bc55c5568670e1f7cd99166697afef7e67a9045c29696ab\": container with ID starting with 646286a53000eee78bc55c5568670e1f7cd99166697afef7e67a9045c29696ab not found: ID does not exist" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.355120 5113 scope.go:117] "RemoveContainer" containerID="97c631ada2ff6f7358da07a083e927818a0f340b60e2433ab787876433daf53f" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.355335 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97c631ada2ff6f7358da07a083e927818a0f340b60e2433ab787876433daf53f"} err="failed to get container status \"97c631ada2ff6f7358da07a083e927818a0f340b60e2433ab787876433daf53f\": rpc error: code = NotFound desc = could not find container \"97c631ada2ff6f7358da07a083e927818a0f340b60e2433ab787876433daf53f\": container with ID starting with 97c631ada2ff6f7358da07a083e927818a0f340b60e2433ab787876433daf53f not found: ID does not exist" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.355355 5113 scope.go:117] "RemoveContainer" containerID="2bbcfc4fe67fd6964c0a676c31d53d6a41ce547a0cea1860f431a866d57f2fbc" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 
09:28:55.355659 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bbcfc4fe67fd6964c0a676c31d53d6a41ce547a0cea1860f431a866d57f2fbc"} err="failed to get container status \"2bbcfc4fe67fd6964c0a676c31d53d6a41ce547a0cea1860f431a866d57f2fbc\": rpc error: code = NotFound desc = could not find container \"2bbcfc4fe67fd6964c0a676c31d53d6a41ce547a0cea1860f431a866d57f2fbc\": container with ID starting with 2bbcfc4fe67fd6964c0a676c31d53d6a41ce547a0cea1860f431a866d57f2fbc not found: ID does not exist" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.355676 5113 scope.go:117] "RemoveContainer" containerID="b76c6682140eef08a669526caeec358c600392a2765da92707cc82c75de617c9" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.355910 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b76c6682140eef08a669526caeec358c600392a2765da92707cc82c75de617c9"} err="failed to get container status \"b76c6682140eef08a669526caeec358c600392a2765da92707cc82c75de617c9\": rpc error: code = NotFound desc = could not find container \"b76c6682140eef08a669526caeec358c600392a2765da92707cc82c75de617c9\": container with ID starting with b76c6682140eef08a669526caeec358c600392a2765da92707cc82c75de617c9 not found: ID does not exist" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.355936 5113 scope.go:117] "RemoveContainer" containerID="4cd689a184396b05630e15f65947e3038ab9c331ef0d3475ec9efea6e4af0c4e" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.356216 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cd689a184396b05630e15f65947e3038ab9c331ef0d3475ec9efea6e4af0c4e"} err="failed to get container status \"4cd689a184396b05630e15f65947e3038ab9c331ef0d3475ec9efea6e4af0c4e\": rpc error: code = NotFound desc = could not find container \"4cd689a184396b05630e15f65947e3038ab9c331ef0d3475ec9efea6e4af0c4e\": container with ID starting with 
4cd689a184396b05630e15f65947e3038ab9c331ef0d3475ec9efea6e4af0c4e not found: ID does not exist" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.356261 5113 scope.go:117] "RemoveContainer" containerID="9822e6e2fc04b0bc932b908f806095755edfb24f3e4f2fae7b5f7c1911ec8e49" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.356680 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9822e6e2fc04b0bc932b908f806095755edfb24f3e4f2fae7b5f7c1911ec8e49"} err="failed to get container status \"9822e6e2fc04b0bc932b908f806095755edfb24f3e4f2fae7b5f7c1911ec8e49\": rpc error: code = NotFound desc = could not find container \"9822e6e2fc04b0bc932b908f806095755edfb24f3e4f2fae7b5f7c1911ec8e49\": container with ID starting with 9822e6e2fc04b0bc932b908f806095755edfb24f3e4f2fae7b5f7c1911ec8e49 not found: ID does not exist" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.356707 5113 scope.go:117] "RemoveContainer" containerID="5fc370078d7251e4b7b4f881fd5ada8d75b9ceb62d1f69a492b485a6b2c2ef74" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.356985 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fc370078d7251e4b7b4f881fd5ada8d75b9ceb62d1f69a492b485a6b2c2ef74"} err="failed to get container status \"5fc370078d7251e4b7b4f881fd5ada8d75b9ceb62d1f69a492b485a6b2c2ef74\": rpc error: code = NotFound desc = could not find container \"5fc370078d7251e4b7b4f881fd5ada8d75b9ceb62d1f69a492b485a6b2c2ef74\": container with ID starting with 5fc370078d7251e4b7b4f881fd5ada8d75b9ceb62d1f69a492b485a6b2c2ef74 not found: ID does not exist" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.357002 5113 scope.go:117] "RemoveContainer" containerID="34e19902f610c0a75a9fbc00cb8c7cd18dbe5ea294e8217d48b706643672ee08" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.357260 5113 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"34e19902f610c0a75a9fbc00cb8c7cd18dbe5ea294e8217d48b706643672ee08"} err="failed to get container status \"34e19902f610c0a75a9fbc00cb8c7cd18dbe5ea294e8217d48b706643672ee08\": rpc error: code = NotFound desc = could not find container \"34e19902f610c0a75a9fbc00cb8c7cd18dbe5ea294e8217d48b706643672ee08\": container with ID starting with 34e19902f610c0a75a9fbc00cb8c7cd18dbe5ea294e8217d48b706643672ee08 not found: ID does not exist" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.357281 5113 scope.go:117] "RemoveContainer" containerID="4e8b9346b62936928cc5545618165392c34310abf146dfe5ecbd67ae78585b57" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.357473 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e8b9346b62936928cc5545618165392c34310abf146dfe5ecbd67ae78585b57"} err="failed to get container status \"4e8b9346b62936928cc5545618165392c34310abf146dfe5ecbd67ae78585b57\": rpc error: code = NotFound desc = could not find container \"4e8b9346b62936928cc5545618165392c34310abf146dfe5ecbd67ae78585b57\": container with ID starting with 4e8b9346b62936928cc5545618165392c34310abf146dfe5ecbd67ae78585b57 not found: ID does not exist" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.357495 5113 scope.go:117] "RemoveContainer" containerID="646286a53000eee78bc55c5568670e1f7cd99166697afef7e67a9045c29696ab" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.358655 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"646286a53000eee78bc55c5568670e1f7cd99166697afef7e67a9045c29696ab"} err="failed to get container status \"646286a53000eee78bc55c5568670e1f7cd99166697afef7e67a9045c29696ab\": rpc error: code = NotFound desc = could not find container \"646286a53000eee78bc55c5568670e1f7cd99166697afef7e67a9045c29696ab\": container with ID starting with 646286a53000eee78bc55c5568670e1f7cd99166697afef7e67a9045c29696ab not found: ID does not 
exist" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.358675 5113 scope.go:117] "RemoveContainer" containerID="97c631ada2ff6f7358da07a083e927818a0f340b60e2433ab787876433daf53f" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.359017 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97c631ada2ff6f7358da07a083e927818a0f340b60e2433ab787876433daf53f"} err="failed to get container status \"97c631ada2ff6f7358da07a083e927818a0f340b60e2433ab787876433daf53f\": rpc error: code = NotFound desc = could not find container \"97c631ada2ff6f7358da07a083e927818a0f340b60e2433ab787876433daf53f\": container with ID starting with 97c631ada2ff6f7358da07a083e927818a0f340b60e2433ab787876433daf53f not found: ID does not exist" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.359063 5113 scope.go:117] "RemoveContainer" containerID="2bbcfc4fe67fd6964c0a676c31d53d6a41ce547a0cea1860f431a866d57f2fbc" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.359407 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bbcfc4fe67fd6964c0a676c31d53d6a41ce547a0cea1860f431a866d57f2fbc"} err="failed to get container status \"2bbcfc4fe67fd6964c0a676c31d53d6a41ce547a0cea1860f431a866d57f2fbc\": rpc error: code = NotFound desc = could not find container \"2bbcfc4fe67fd6964c0a676c31d53d6a41ce547a0cea1860f431a866d57f2fbc\": container with ID starting with 2bbcfc4fe67fd6964c0a676c31d53d6a41ce547a0cea1860f431a866d57f2fbc not found: ID does not exist" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.359446 5113 scope.go:117] "RemoveContainer" containerID="b76c6682140eef08a669526caeec358c600392a2765da92707cc82c75de617c9" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.359683 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b76c6682140eef08a669526caeec358c600392a2765da92707cc82c75de617c9"} err="failed to get container status 
\"b76c6682140eef08a669526caeec358c600392a2765da92707cc82c75de617c9\": rpc error: code = NotFound desc = could not find container \"b76c6682140eef08a669526caeec358c600392a2765da92707cc82c75de617c9\": container with ID starting with b76c6682140eef08a669526caeec358c600392a2765da92707cc82c75de617c9 not found: ID does not exist" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.359699 5113 scope.go:117] "RemoveContainer" containerID="4cd689a184396b05630e15f65947e3038ab9c331ef0d3475ec9efea6e4af0c4e" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.360034 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cd689a184396b05630e15f65947e3038ab9c331ef0d3475ec9efea6e4af0c4e"} err="failed to get container status \"4cd689a184396b05630e15f65947e3038ab9c331ef0d3475ec9efea6e4af0c4e\": rpc error: code = NotFound desc = could not find container \"4cd689a184396b05630e15f65947e3038ab9c331ef0d3475ec9efea6e4af0c4e\": container with ID starting with 4cd689a184396b05630e15f65947e3038ab9c331ef0d3475ec9efea6e4af0c4e not found: ID does not exist" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.360050 5113 scope.go:117] "RemoveContainer" containerID="9822e6e2fc04b0bc932b908f806095755edfb24f3e4f2fae7b5f7c1911ec8e49" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.360259 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9822e6e2fc04b0bc932b908f806095755edfb24f3e4f2fae7b5f7c1911ec8e49"} err="failed to get container status \"9822e6e2fc04b0bc932b908f806095755edfb24f3e4f2fae7b5f7c1911ec8e49\": rpc error: code = NotFound desc = could not find container \"9822e6e2fc04b0bc932b908f806095755edfb24f3e4f2fae7b5f7c1911ec8e49\": container with ID starting with 9822e6e2fc04b0bc932b908f806095755edfb24f3e4f2fae7b5f7c1911ec8e49 not found: ID does not exist" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.360275 5113 scope.go:117] "RemoveContainer" 
containerID="5fc370078d7251e4b7b4f881fd5ada8d75b9ceb62d1f69a492b485a6b2c2ef74" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.360476 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fc370078d7251e4b7b4f881fd5ada8d75b9ceb62d1f69a492b485a6b2c2ef74"} err="failed to get container status \"5fc370078d7251e4b7b4f881fd5ada8d75b9ceb62d1f69a492b485a6b2c2ef74\": rpc error: code = NotFound desc = could not find container \"5fc370078d7251e4b7b4f881fd5ada8d75b9ceb62d1f69a492b485a6b2c2ef74\": container with ID starting with 5fc370078d7251e4b7b4f881fd5ada8d75b9ceb62d1f69a492b485a6b2c2ef74 not found: ID does not exist" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.360500 5113 scope.go:117] "RemoveContainer" containerID="34e19902f610c0a75a9fbc00cb8c7cd18dbe5ea294e8217d48b706643672ee08" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.360808 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34e19902f610c0a75a9fbc00cb8c7cd18dbe5ea294e8217d48b706643672ee08"} err="failed to get container status \"34e19902f610c0a75a9fbc00cb8c7cd18dbe5ea294e8217d48b706643672ee08\": rpc error: code = NotFound desc = could not find container \"34e19902f610c0a75a9fbc00cb8c7cd18dbe5ea294e8217d48b706643672ee08\": container with ID starting with 34e19902f610c0a75a9fbc00cb8c7cd18dbe5ea294e8217d48b706643672ee08 not found: ID does not exist" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.360824 5113 scope.go:117] "RemoveContainer" containerID="4e8b9346b62936928cc5545618165392c34310abf146dfe5ecbd67ae78585b57" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.361062 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e8b9346b62936928cc5545618165392c34310abf146dfe5ecbd67ae78585b57"} err="failed to get container status \"4e8b9346b62936928cc5545618165392c34310abf146dfe5ecbd67ae78585b57\": rpc error: code = NotFound desc = could 
not find container \"4e8b9346b62936928cc5545618165392c34310abf146dfe5ecbd67ae78585b57\": container with ID starting with 4e8b9346b62936928cc5545618165392c34310abf146dfe5ecbd67ae78585b57 not found: ID does not exist" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.361110 5113 scope.go:117] "RemoveContainer" containerID="646286a53000eee78bc55c5568670e1f7cd99166697afef7e67a9045c29696ab" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.361411 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"646286a53000eee78bc55c5568670e1f7cd99166697afef7e67a9045c29696ab"} err="failed to get container status \"646286a53000eee78bc55c5568670e1f7cd99166697afef7e67a9045c29696ab\": rpc error: code = NotFound desc = could not find container \"646286a53000eee78bc55c5568670e1f7cd99166697afef7e67a9045c29696ab\": container with ID starting with 646286a53000eee78bc55c5568670e1f7cd99166697afef7e67a9045c29696ab not found: ID does not exist" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.361427 5113 scope.go:117] "RemoveContainer" containerID="97c631ada2ff6f7358da07a083e927818a0f340b60e2433ab787876433daf53f" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.361618 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97c631ada2ff6f7358da07a083e927818a0f340b60e2433ab787876433daf53f"} err="failed to get container status \"97c631ada2ff6f7358da07a083e927818a0f340b60e2433ab787876433daf53f\": rpc error: code = NotFound desc = could not find container \"97c631ada2ff6f7358da07a083e927818a0f340b60e2433ab787876433daf53f\": container with ID starting with 97c631ada2ff6f7358da07a083e927818a0f340b60e2433ab787876433daf53f not found: ID does not exist" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.361633 5113 scope.go:117] "RemoveContainer" containerID="2bbcfc4fe67fd6964c0a676c31d53d6a41ce547a0cea1860f431a866d57f2fbc" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 
09:28:55.361834 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bbcfc4fe67fd6964c0a676c31d53d6a41ce547a0cea1860f431a866d57f2fbc"} err="failed to get container status \"2bbcfc4fe67fd6964c0a676c31d53d6a41ce547a0cea1860f431a866d57f2fbc\": rpc error: code = NotFound desc = could not find container \"2bbcfc4fe67fd6964c0a676c31d53d6a41ce547a0cea1860f431a866d57f2fbc\": container with ID starting with 2bbcfc4fe67fd6964c0a676c31d53d6a41ce547a0cea1860f431a866d57f2fbc not found: ID does not exist" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.361848 5113 scope.go:117] "RemoveContainer" containerID="b76c6682140eef08a669526caeec358c600392a2765da92707cc82c75de617c9" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.362037 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b76c6682140eef08a669526caeec358c600392a2765da92707cc82c75de617c9"} err="failed to get container status \"b76c6682140eef08a669526caeec358c600392a2765da92707cc82c75de617c9\": rpc error: code = NotFound desc = could not find container \"b76c6682140eef08a669526caeec358c600392a2765da92707cc82c75de617c9\": container with ID starting with b76c6682140eef08a669526caeec358c600392a2765da92707cc82c75de617c9 not found: ID does not exist" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.362054 5113 scope.go:117] "RemoveContainer" containerID="4cd689a184396b05630e15f65947e3038ab9c331ef0d3475ec9efea6e4af0c4e" Jan 21 09:28:55 crc kubenswrapper[5113]: I0121 09:28:55.362249 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cd689a184396b05630e15f65947e3038ab9c331ef0d3475ec9efea6e4af0c4e"} err="failed to get container status \"4cd689a184396b05630e15f65947e3038ab9c331ef0d3475ec9efea6e4af0c4e\": rpc error: code = NotFound desc = could not find container \"4cd689a184396b05630e15f65947e3038ab9c331ef0d3475ec9efea6e4af0c4e\": container with ID starting with 
4cd689a184396b05630e15f65947e3038ab9c331ef0d3475ec9efea6e4af0c4e not found: ID does not exist" Jan 21 09:28:56 crc kubenswrapper[5113]: I0121 09:28:56.160140 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vcw7s_11da35cd-b282-4537-ac8f-b6c86b18c21f/kube-multus/0.log" Jan 21 09:28:56 crc kubenswrapper[5113]: I0121 09:28:56.160554 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vcw7s" event={"ID":"11da35cd-b282-4537-ac8f-b6c86b18c21f","Type":"ContainerStarted","Data":"00d546de27f08e656636ae3ae6f4e1cdd22a52f3c05aa02bd3b78c9a01e828de"} Jan 21 09:28:56 crc kubenswrapper[5113]: I0121 09:28:56.166100 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" event={"ID":"6ab23e67-e0ab-4205-80ac-1b8900e15990","Type":"ContainerStarted","Data":"eed83b454319aa96a93f6655ea7020417bac487f5deaf1f4d4932402f6e89f60"} Jan 21 09:28:56 crc kubenswrapper[5113]: I0121 09:28:56.166153 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" event={"ID":"6ab23e67-e0ab-4205-80ac-1b8900e15990","Type":"ContainerStarted","Data":"f4ae73268f1a98bde3903a87c3fb7befc16498c30838748bfe4291f53e39ea5a"} Jan 21 09:28:56 crc kubenswrapper[5113]: I0121 09:28:56.166172 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" event={"ID":"6ab23e67-e0ab-4205-80ac-1b8900e15990","Type":"ContainerStarted","Data":"fa708c40a42487c65fac5ab8adad4458276bc5406db2dbefad54ef447d168729"} Jan 21 09:28:56 crc kubenswrapper[5113]: I0121 09:28:56.166188 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" event={"ID":"6ab23e67-e0ab-4205-80ac-1b8900e15990","Type":"ContainerStarted","Data":"8d50c80bbb65f3d02c5d33b6728c76d8c977c444c99cd2f93461ae709ffbeb49"} Jan 21 09:28:56 crc kubenswrapper[5113]: I0121 09:28:56.166204 5113 kubelet.go:2569] "SyncLoop (PLEG): 
event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" event={"ID":"6ab23e67-e0ab-4205-80ac-1b8900e15990","Type":"ContainerStarted","Data":"d0e85610e4b8caa67bd88d3482be92d3421e4a1b77c03fbb43f2ef1a445dccaa"} Jan 21 09:28:56 crc kubenswrapper[5113]: I0121 09:28:56.166218 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" event={"ID":"6ab23e67-e0ab-4205-80ac-1b8900e15990","Type":"ContainerStarted","Data":"c34b55731a3fb22469ed7b8e1e1e541338553d142b71836b2c5033ef71e96506"} Jan 21 09:28:56 crc kubenswrapper[5113]: I0121 09:28:56.857302 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4af3bb76-a840-45dd-941d-0b6ef5883ed8" path="/var/lib/kubelet/pods/4af3bb76-a840-45dd-941d-0b6ef5883ed8/volumes" Jan 21 09:28:59 crc kubenswrapper[5113]: I0121 09:28:59.189493 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" event={"ID":"6ab23e67-e0ab-4205-80ac-1b8900e15990","Type":"ContainerStarted","Data":"5ee7beed7b07c937d2fbae58d9a051382841a54c2020326a67fa5e74af868a56"} Jan 21 09:29:01 crc kubenswrapper[5113]: I0121 09:29:01.210964 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" event={"ID":"6ab23e67-e0ab-4205-80ac-1b8900e15990","Type":"ContainerStarted","Data":"d7ad1f1d26287d384e78472fe81611dab859409074afb881cbf04c557e24cb7c"} Jan 21 09:29:01 crc kubenswrapper[5113]: I0121 09:29:01.258348 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" podStartSLOduration=7.258327179 podStartE2EDuration="7.258327179s" podCreationTimestamp="2026-01-21 09:28:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:29:01.257971359 +0000 UTC m=+670.758798418" watchObservedRunningTime="2026-01-21 09:29:01.258327179 +0000 UTC m=+670.759154238" 
Jan 21 09:29:02 crc kubenswrapper[5113]: I0121 09:29:02.219928 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:29:02 crc kubenswrapper[5113]: I0121 09:29:02.220014 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:29:02 crc kubenswrapper[5113]: I0121 09:29:02.220041 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:29:02 crc kubenswrapper[5113]: I0121 09:29:02.263603 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:29:02 crc kubenswrapper[5113]: I0121 09:29:02.269063 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:29:34 crc kubenswrapper[5113]: I0121 09:29:34.271304 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-r7mlm" Jan 21 09:29:53 crc kubenswrapper[5113]: I0121 09:29:53.782501 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jxprx"] Jan 21 09:29:53 crc kubenswrapper[5113]: I0121 09:29:53.808602 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jxprx" Jan 21 09:29:53 crc kubenswrapper[5113]: I0121 09:29:53.808653 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jxprx"] Jan 21 09:29:53 crc kubenswrapper[5113]: I0121 09:29:53.928273 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xpjm\" (UniqueName: \"kubernetes.io/projected/139dd7e3-50a7-47e3-9047-4f3abb6ea184-kube-api-access-7xpjm\") pod \"community-operators-jxprx\" (UID: \"139dd7e3-50a7-47e3-9047-4f3abb6ea184\") " pod="openshift-marketplace/community-operators-jxprx" Jan 21 09:29:53 crc kubenswrapper[5113]: I0121 09:29:53.929036 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/139dd7e3-50a7-47e3-9047-4f3abb6ea184-catalog-content\") pod \"community-operators-jxprx\" (UID: \"139dd7e3-50a7-47e3-9047-4f3abb6ea184\") " pod="openshift-marketplace/community-operators-jxprx" Jan 21 09:29:53 crc kubenswrapper[5113]: I0121 09:29:53.929055 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/139dd7e3-50a7-47e3-9047-4f3abb6ea184-utilities\") pod \"community-operators-jxprx\" (UID: \"139dd7e3-50a7-47e3-9047-4f3abb6ea184\") " pod="openshift-marketplace/community-operators-jxprx" Jan 21 09:29:54 crc kubenswrapper[5113]: I0121 09:29:54.030082 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7xpjm\" (UniqueName: \"kubernetes.io/projected/139dd7e3-50a7-47e3-9047-4f3abb6ea184-kube-api-access-7xpjm\") pod \"community-operators-jxprx\" (UID: \"139dd7e3-50a7-47e3-9047-4f3abb6ea184\") " pod="openshift-marketplace/community-operators-jxprx" Jan 21 09:29:54 crc kubenswrapper[5113]: I0121 09:29:54.030144 5113 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/139dd7e3-50a7-47e3-9047-4f3abb6ea184-catalog-content\") pod \"community-operators-jxprx\" (UID: \"139dd7e3-50a7-47e3-9047-4f3abb6ea184\") " pod="openshift-marketplace/community-operators-jxprx" Jan 21 09:29:54 crc kubenswrapper[5113]: I0121 09:29:54.030281 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/139dd7e3-50a7-47e3-9047-4f3abb6ea184-utilities\") pod \"community-operators-jxprx\" (UID: \"139dd7e3-50a7-47e3-9047-4f3abb6ea184\") " pod="openshift-marketplace/community-operators-jxprx" Jan 21 09:29:54 crc kubenswrapper[5113]: I0121 09:29:54.030638 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/139dd7e3-50a7-47e3-9047-4f3abb6ea184-catalog-content\") pod \"community-operators-jxprx\" (UID: \"139dd7e3-50a7-47e3-9047-4f3abb6ea184\") " pod="openshift-marketplace/community-operators-jxprx" Jan 21 09:29:54 crc kubenswrapper[5113]: I0121 09:29:54.030756 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/139dd7e3-50a7-47e3-9047-4f3abb6ea184-utilities\") pod \"community-operators-jxprx\" (UID: \"139dd7e3-50a7-47e3-9047-4f3abb6ea184\") " pod="openshift-marketplace/community-operators-jxprx" Jan 21 09:29:54 crc kubenswrapper[5113]: I0121 09:29:54.057299 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xpjm\" (UniqueName: \"kubernetes.io/projected/139dd7e3-50a7-47e3-9047-4f3abb6ea184-kube-api-access-7xpjm\") pod \"community-operators-jxprx\" (UID: \"139dd7e3-50a7-47e3-9047-4f3abb6ea184\") " pod="openshift-marketplace/community-operators-jxprx" Jan 21 09:29:54 crc kubenswrapper[5113]: I0121 09:29:54.125398 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jxprx" Jan 21 09:29:54 crc kubenswrapper[5113]: I0121 09:29:54.391811 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jxprx"] Jan 21 09:29:54 crc kubenswrapper[5113]: I0121 09:29:54.574219 5113 generic.go:358] "Generic (PLEG): container finished" podID="139dd7e3-50a7-47e3-9047-4f3abb6ea184" containerID="a8258dac9c23314cc7f7aa0eea240698ebf915189d381d353f841921b3777d03" exitCode=0 Jan 21 09:29:54 crc kubenswrapper[5113]: I0121 09:29:54.574307 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jxprx" event={"ID":"139dd7e3-50a7-47e3-9047-4f3abb6ea184","Type":"ContainerDied","Data":"a8258dac9c23314cc7f7aa0eea240698ebf915189d381d353f841921b3777d03"} Jan 21 09:29:54 crc kubenswrapper[5113]: I0121 09:29:54.574332 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jxprx" event={"ID":"139dd7e3-50a7-47e3-9047-4f3abb6ea184","Type":"ContainerStarted","Data":"96a6cb157c76bb273a43b9a5cf585ed9888b73fd8633107b309a89393b3b5e39"} Jan 21 09:29:56 crc kubenswrapper[5113]: I0121 09:29:56.592535 5113 generic.go:358] "Generic (PLEG): container finished" podID="139dd7e3-50a7-47e3-9047-4f3abb6ea184" containerID="a4daa88da17233ad2970ef39c3983fb31fdac0ac0bfd494762a2ca2c71121565" exitCode=0 Jan 21 09:29:56 crc kubenswrapper[5113]: I0121 09:29:56.592719 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jxprx" event={"ID":"139dd7e3-50a7-47e3-9047-4f3abb6ea184","Type":"ContainerDied","Data":"a4daa88da17233ad2970ef39c3983fb31fdac0ac0bfd494762a2ca2c71121565"} Jan 21 09:29:57 crc kubenswrapper[5113]: I0121 09:29:57.603260 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jxprx" 
event={"ID":"139dd7e3-50a7-47e3-9047-4f3abb6ea184","Type":"ContainerStarted","Data":"b0236896b893365d22e06de784285d0c3730dd5f30195c8b3859d6740e315c5a"} Jan 21 09:29:57 crc kubenswrapper[5113]: I0121 09:29:57.631358 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jxprx" podStartSLOduration=3.699996821 podStartE2EDuration="4.631334484s" podCreationTimestamp="2026-01-21 09:29:53 +0000 UTC" firstStartedPulling="2026-01-21 09:29:54.575438321 +0000 UTC m=+724.076265370" lastFinishedPulling="2026-01-21 09:29:55.506775944 +0000 UTC m=+725.007603033" observedRunningTime="2026-01-21 09:29:57.630610013 +0000 UTC m=+727.131437082" watchObservedRunningTime="2026-01-21 09:29:57.631334484 +0000 UTC m=+727.132161563" Jan 21 09:30:00 crc kubenswrapper[5113]: I0121 09:30:00.144826 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483130-7wqzp"] Jan 21 09:30:00 crc kubenswrapper[5113]: I0121 09:30:00.191698 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483130-xkb97"] Jan 21 09:30:00 crc kubenswrapper[5113]: I0121 09:30:00.191902 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483130-7wqzp" Jan 21 09:30:00 crc kubenswrapper[5113]: I0121 09:30:00.194176 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 09:30:00 crc kubenswrapper[5113]: I0121 09:30:00.194592 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-mmrr5\"" Jan 21 09:30:00 crc kubenswrapper[5113]: I0121 09:30:00.196821 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483130-7wqzp"] Jan 21 09:30:00 crc kubenswrapper[5113]: I0121 09:30:00.196849 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483130-xkb97"] Jan 21 09:30:00 crc kubenswrapper[5113]: I0121 09:30:00.196936 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483130-xkb97" Jan 21 09:30:00 crc kubenswrapper[5113]: I0121 09:30:00.197712 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 09:30:00 crc kubenswrapper[5113]: I0121 09:30:00.198519 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 21 09:30:00 crc kubenswrapper[5113]: I0121 09:30:00.199508 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 21 09:30:00 crc kubenswrapper[5113]: I0121 09:30:00.313905 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc41973b-d903-4dfd-854d-6da1717bc76e-config-volume\") pod \"collect-profiles-29483130-xkb97\" (UID: 
\"cc41973b-d903-4dfd-854d-6da1717bc76e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483130-xkb97" Jan 21 09:30:00 crc kubenswrapper[5113]: I0121 09:30:00.313972 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7ksl\" (UniqueName: \"kubernetes.io/projected/cc41973b-d903-4dfd-854d-6da1717bc76e-kube-api-access-h7ksl\") pod \"collect-profiles-29483130-xkb97\" (UID: \"cc41973b-d903-4dfd-854d-6da1717bc76e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483130-xkb97" Jan 21 09:30:00 crc kubenswrapper[5113]: I0121 09:30:00.314063 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc41973b-d903-4dfd-854d-6da1717bc76e-secret-volume\") pod \"collect-profiles-29483130-xkb97\" (UID: \"cc41973b-d903-4dfd-854d-6da1717bc76e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483130-xkb97" Jan 21 09:30:00 crc kubenswrapper[5113]: I0121 09:30:00.314118 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw87x\" (UniqueName: \"kubernetes.io/projected/08901054-a8e8-48af-b623-594a806778e6-kube-api-access-mw87x\") pod \"auto-csr-approver-29483130-7wqzp\" (UID: \"08901054-a8e8-48af-b623-594a806778e6\") " pod="openshift-infra/auto-csr-approver-29483130-7wqzp" Jan 21 09:30:00 crc kubenswrapper[5113]: I0121 09:30:00.415158 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h7ksl\" (UniqueName: \"kubernetes.io/projected/cc41973b-d903-4dfd-854d-6da1717bc76e-kube-api-access-h7ksl\") pod \"collect-profiles-29483130-xkb97\" (UID: \"cc41973b-d903-4dfd-854d-6da1717bc76e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483130-xkb97" Jan 21 09:30:00 crc kubenswrapper[5113]: I0121 09:30:00.415239 5113 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc41973b-d903-4dfd-854d-6da1717bc76e-secret-volume\") pod \"collect-profiles-29483130-xkb97\" (UID: \"cc41973b-d903-4dfd-854d-6da1717bc76e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483130-xkb97" Jan 21 09:30:00 crc kubenswrapper[5113]: I0121 09:30:00.415271 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mw87x\" (UniqueName: \"kubernetes.io/projected/08901054-a8e8-48af-b623-594a806778e6-kube-api-access-mw87x\") pod \"auto-csr-approver-29483130-7wqzp\" (UID: \"08901054-a8e8-48af-b623-594a806778e6\") " pod="openshift-infra/auto-csr-approver-29483130-7wqzp" Jan 21 09:30:00 crc kubenswrapper[5113]: I0121 09:30:00.415726 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc41973b-d903-4dfd-854d-6da1717bc76e-config-volume\") pod \"collect-profiles-29483130-xkb97\" (UID: \"cc41973b-d903-4dfd-854d-6da1717bc76e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483130-xkb97" Jan 21 09:30:00 crc kubenswrapper[5113]: I0121 09:30:00.416857 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc41973b-d903-4dfd-854d-6da1717bc76e-config-volume\") pod \"collect-profiles-29483130-xkb97\" (UID: \"cc41973b-d903-4dfd-854d-6da1717bc76e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483130-xkb97" Jan 21 09:30:00 crc kubenswrapper[5113]: I0121 09:30:00.429368 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc41973b-d903-4dfd-854d-6da1717bc76e-secret-volume\") pod \"collect-profiles-29483130-xkb97\" (UID: \"cc41973b-d903-4dfd-854d-6da1717bc76e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483130-xkb97" Jan 21 09:30:00 
crc kubenswrapper[5113]: I0121 09:30:00.435596 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7ksl\" (UniqueName: \"kubernetes.io/projected/cc41973b-d903-4dfd-854d-6da1717bc76e-kube-api-access-h7ksl\") pod \"collect-profiles-29483130-xkb97\" (UID: \"cc41973b-d903-4dfd-854d-6da1717bc76e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483130-xkb97" Jan 21 09:30:00 crc kubenswrapper[5113]: I0121 09:30:00.444547 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw87x\" (UniqueName: \"kubernetes.io/projected/08901054-a8e8-48af-b623-594a806778e6-kube-api-access-mw87x\") pod \"auto-csr-approver-29483130-7wqzp\" (UID: \"08901054-a8e8-48af-b623-594a806778e6\") " pod="openshift-infra/auto-csr-approver-29483130-7wqzp" Jan 21 09:30:00 crc kubenswrapper[5113]: I0121 09:30:00.519290 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483130-7wqzp" Jan 21 09:30:00 crc kubenswrapper[5113]: I0121 09:30:00.524683 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483130-xkb97" Jan 21 09:30:00 crc kubenswrapper[5113]: I0121 09:30:00.751578 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483130-xkb97"] Jan 21 09:30:00 crc kubenswrapper[5113]: W0121 09:30:00.763714 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc41973b_d903_4dfd_854d_6da1717bc76e.slice/crio-a94bfdbba388daf6f547b4481a4f3ae5be01f404e51d850d7ea1b0f69e91f313 WatchSource:0}: Error finding container a94bfdbba388daf6f547b4481a4f3ae5be01f404e51d850d7ea1b0f69e91f313: Status 404 returned error can't find the container with id a94bfdbba388daf6f547b4481a4f3ae5be01f404e51d850d7ea1b0f69e91f313 Jan 21 09:30:00 crc kubenswrapper[5113]: I0121 09:30:00.800507 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483130-7wqzp"] Jan 21 09:30:00 crc kubenswrapper[5113]: W0121 09:30:00.804620 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08901054_a8e8_48af_b623_594a806778e6.slice/crio-3818d6c94cdd1e21e90c3e1294f73e15c3a57e4793146357adbdfcd5aa935e4f WatchSource:0}: Error finding container 3818d6c94cdd1e21e90c3e1294f73e15c3a57e4793146357adbdfcd5aa935e4f: Status 404 returned error can't find the container with id 3818d6c94cdd1e21e90c3e1294f73e15c3a57e4793146357adbdfcd5aa935e4f Jan 21 09:30:01 crc kubenswrapper[5113]: I0121 09:30:01.638251 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483130-7wqzp" event={"ID":"08901054-a8e8-48af-b623-594a806778e6","Type":"ContainerStarted","Data":"3818d6c94cdd1e21e90c3e1294f73e15c3a57e4793146357adbdfcd5aa935e4f"} Jan 21 09:30:01 crc kubenswrapper[5113]: I0121 09:30:01.640726 5113 generic.go:358] "Generic (PLEG): container finished" 
podID="cc41973b-d903-4dfd-854d-6da1717bc76e" containerID="0b3c844b04444e58eb5f5492cb43b305cf14fa6a24471c7dfeb8bdecb5cdc73e" exitCode=0 Jan 21 09:30:01 crc kubenswrapper[5113]: I0121 09:30:01.640865 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483130-xkb97" event={"ID":"cc41973b-d903-4dfd-854d-6da1717bc76e","Type":"ContainerDied","Data":"0b3c844b04444e58eb5f5492cb43b305cf14fa6a24471c7dfeb8bdecb5cdc73e"} Jan 21 09:30:01 crc kubenswrapper[5113]: I0121 09:30:01.641186 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483130-xkb97" event={"ID":"cc41973b-d903-4dfd-854d-6da1717bc76e","Type":"ContainerStarted","Data":"a94bfdbba388daf6f547b4481a4f3ae5be01f404e51d850d7ea1b0f69e91f313"} Jan 21 09:30:02 crc kubenswrapper[5113]: I0121 09:30:02.650274 5113 generic.go:358] "Generic (PLEG): container finished" podID="08901054-a8e8-48af-b623-594a806778e6" containerID="ff3e4286ade44b5c247cee413574dfb839eb51ac8704495a4c14afc2a7415930" exitCode=0 Jan 21 09:30:02 crc kubenswrapper[5113]: I0121 09:30:02.650975 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483130-7wqzp" event={"ID":"08901054-a8e8-48af-b623-594a806778e6","Type":"ContainerDied","Data":"ff3e4286ade44b5c247cee413574dfb839eb51ac8704495a4c14afc2a7415930"} Jan 21 09:30:02 crc kubenswrapper[5113]: I0121 09:30:02.986562 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483130-xkb97" Jan 21 09:30:03 crc kubenswrapper[5113]: I0121 09:30:03.151435 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7ksl\" (UniqueName: \"kubernetes.io/projected/cc41973b-d903-4dfd-854d-6da1717bc76e-kube-api-access-h7ksl\") pod \"cc41973b-d903-4dfd-854d-6da1717bc76e\" (UID: \"cc41973b-d903-4dfd-854d-6da1717bc76e\") " Jan 21 09:30:03 crc kubenswrapper[5113]: I0121 09:30:03.151611 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc41973b-d903-4dfd-854d-6da1717bc76e-config-volume\") pod \"cc41973b-d903-4dfd-854d-6da1717bc76e\" (UID: \"cc41973b-d903-4dfd-854d-6da1717bc76e\") " Jan 21 09:30:03 crc kubenswrapper[5113]: I0121 09:30:03.151863 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc41973b-d903-4dfd-854d-6da1717bc76e-secret-volume\") pod \"cc41973b-d903-4dfd-854d-6da1717bc76e\" (UID: \"cc41973b-d903-4dfd-854d-6da1717bc76e\") " Jan 21 09:30:03 crc kubenswrapper[5113]: I0121 09:30:03.152592 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc41973b-d903-4dfd-854d-6da1717bc76e-config-volume" (OuterVolumeSpecName: "config-volume") pod "cc41973b-d903-4dfd-854d-6da1717bc76e" (UID: "cc41973b-d903-4dfd-854d-6da1717bc76e"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:30:03 crc kubenswrapper[5113]: I0121 09:30:03.152777 5113 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc41973b-d903-4dfd-854d-6da1717bc76e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 09:30:03 crc kubenswrapper[5113]: I0121 09:30:03.158890 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc41973b-d903-4dfd-854d-6da1717bc76e-kube-api-access-h7ksl" (OuterVolumeSpecName: "kube-api-access-h7ksl") pod "cc41973b-d903-4dfd-854d-6da1717bc76e" (UID: "cc41973b-d903-4dfd-854d-6da1717bc76e"). InnerVolumeSpecName "kube-api-access-h7ksl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:30:03 crc kubenswrapper[5113]: I0121 09:30:03.159239 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc41973b-d903-4dfd-854d-6da1717bc76e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cc41973b-d903-4dfd-854d-6da1717bc76e" (UID: "cc41973b-d903-4dfd-854d-6da1717bc76e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:30:03 crc kubenswrapper[5113]: I0121 09:30:03.254009 5113 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc41973b-d903-4dfd-854d-6da1717bc76e-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 21 09:30:03 crc kubenswrapper[5113]: I0121 09:30:03.254072 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h7ksl\" (UniqueName: \"kubernetes.io/projected/cc41973b-d903-4dfd-854d-6da1717bc76e-kube-api-access-h7ksl\") on node \"crc\" DevicePath \"\""
Jan 21 09:30:03 crc kubenswrapper[5113]: I0121 09:30:03.660568 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483130-xkb97" event={"ID":"cc41973b-d903-4dfd-854d-6da1717bc76e","Type":"ContainerDied","Data":"a94bfdbba388daf6f547b4481a4f3ae5be01f404e51d850d7ea1b0f69e91f313"}
Jan 21 09:30:03 crc kubenswrapper[5113]: I0121 09:30:03.660779 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a94bfdbba388daf6f547b4481a4f3ae5be01f404e51d850d7ea1b0f69e91f313"
Jan 21 09:30:03 crc kubenswrapper[5113]: I0121 09:30:03.660643 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483130-xkb97"
Jan 21 09:30:03 crc kubenswrapper[5113]: I0121 09:30:03.690249 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vgn4p"]
Jan 21 09:30:03 crc kubenswrapper[5113]: I0121 09:30:03.691027 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vgn4p" podUID="b1401cee-74bd-45dd-b2c8-e9ff222854dc" containerName="registry-server" containerID="cri-o://ab56acb7d7845a18d1249854b7c3702f03a40eaf2223d24d327ac7b1ba5be072" gracePeriod=30
Jan 21 09:30:03 crc kubenswrapper[5113]: I0121 09:30:03.900366 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483130-7wqzp"
Jan 21 09:30:04 crc kubenswrapper[5113]: I0121 09:30:04.026775 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vgn4p"
Jan 21 09:30:04 crc kubenswrapper[5113]: I0121 09:30:04.065761 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mw87x\" (UniqueName: \"kubernetes.io/projected/08901054-a8e8-48af-b623-594a806778e6-kube-api-access-mw87x\") pod \"08901054-a8e8-48af-b623-594a806778e6\" (UID: \"08901054-a8e8-48af-b623-594a806778e6\") "
Jan 21 09:30:04 crc kubenswrapper[5113]: I0121 09:30:04.072109 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08901054-a8e8-48af-b623-594a806778e6-kube-api-access-mw87x" (OuterVolumeSpecName: "kube-api-access-mw87x") pod "08901054-a8e8-48af-b623-594a806778e6" (UID: "08901054-a8e8-48af-b623-594a806778e6"). InnerVolumeSpecName "kube-api-access-mw87x". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:30:04 crc kubenswrapper[5113]: I0121 09:30:04.126021 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-jxprx"
Jan 21 09:30:04 crc kubenswrapper[5113]: I0121 09:30:04.126092 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jxprx"
Jan 21 09:30:04 crc kubenswrapper[5113]: I0121 09:30:04.166997 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jxprx"
Jan 21 09:30:04 crc kubenswrapper[5113]: I0121 09:30:04.167197 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxzqk\" (UniqueName: \"kubernetes.io/projected/b1401cee-74bd-45dd-b2c8-e9ff222854dc-kube-api-access-sxzqk\") pod \"b1401cee-74bd-45dd-b2c8-e9ff222854dc\" (UID: \"b1401cee-74bd-45dd-b2c8-e9ff222854dc\") "
Jan 21 09:30:04 crc kubenswrapper[5113]: I0121 09:30:04.167341 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1401cee-74bd-45dd-b2c8-e9ff222854dc-utilities\") pod \"b1401cee-74bd-45dd-b2c8-e9ff222854dc\" (UID: \"b1401cee-74bd-45dd-b2c8-e9ff222854dc\") "
Jan 21 09:30:04 crc kubenswrapper[5113]: I0121 09:30:04.167384 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1401cee-74bd-45dd-b2c8-e9ff222854dc-catalog-content\") pod \"b1401cee-74bd-45dd-b2c8-e9ff222854dc\" (UID: \"b1401cee-74bd-45dd-b2c8-e9ff222854dc\") "
Jan 21 09:30:04 crc kubenswrapper[5113]: I0121 09:30:04.167981 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mw87x\" (UniqueName: \"kubernetes.io/projected/08901054-a8e8-48af-b623-594a806778e6-kube-api-access-mw87x\") on node \"crc\" DevicePath \"\""
Jan 21 09:30:04 crc kubenswrapper[5113]: I0121 09:30:04.168476 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1401cee-74bd-45dd-b2c8-e9ff222854dc-utilities" (OuterVolumeSpecName: "utilities") pod "b1401cee-74bd-45dd-b2c8-e9ff222854dc" (UID: "b1401cee-74bd-45dd-b2c8-e9ff222854dc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:30:04 crc kubenswrapper[5113]: I0121 09:30:04.170204 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1401cee-74bd-45dd-b2c8-e9ff222854dc-kube-api-access-sxzqk" (OuterVolumeSpecName: "kube-api-access-sxzqk") pod "b1401cee-74bd-45dd-b2c8-e9ff222854dc" (UID: "b1401cee-74bd-45dd-b2c8-e9ff222854dc"). InnerVolumeSpecName "kube-api-access-sxzqk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:30:04 crc kubenswrapper[5113]: I0121 09:30:04.180447 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1401cee-74bd-45dd-b2c8-e9ff222854dc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b1401cee-74bd-45dd-b2c8-e9ff222854dc" (UID: "b1401cee-74bd-45dd-b2c8-e9ff222854dc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:30:04 crc kubenswrapper[5113]: I0121 09:30:04.269397 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sxzqk\" (UniqueName: \"kubernetes.io/projected/b1401cee-74bd-45dd-b2c8-e9ff222854dc-kube-api-access-sxzqk\") on node \"crc\" DevicePath \"\""
Jan 21 09:30:04 crc kubenswrapper[5113]: I0121 09:30:04.269461 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1401cee-74bd-45dd-b2c8-e9ff222854dc-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 09:30:04 crc kubenswrapper[5113]: I0121 09:30:04.269486 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1401cee-74bd-45dd-b2c8-e9ff222854dc-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 09:30:04 crc kubenswrapper[5113]: I0121 09:30:04.667476 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483130-7wqzp"
Jan 21 09:30:04 crc kubenswrapper[5113]: I0121 09:30:04.667506 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483130-7wqzp" event={"ID":"08901054-a8e8-48af-b623-594a806778e6","Type":"ContainerDied","Data":"3818d6c94cdd1e21e90c3e1294f73e15c3a57e4793146357adbdfcd5aa935e4f"}
Jan 21 09:30:04 crc kubenswrapper[5113]: I0121 09:30:04.667531 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3818d6c94cdd1e21e90c3e1294f73e15c3a57e4793146357adbdfcd5aa935e4f"
Jan 21 09:30:04 crc kubenswrapper[5113]: I0121 09:30:04.669923 5113 generic.go:358] "Generic (PLEG): container finished" podID="b1401cee-74bd-45dd-b2c8-e9ff222854dc" containerID="ab56acb7d7845a18d1249854b7c3702f03a40eaf2223d24d327ac7b1ba5be072" exitCode=0
Jan 21 09:30:04 crc kubenswrapper[5113]: I0121 09:30:04.670952 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vgn4p"
Jan 21 09:30:04 crc kubenswrapper[5113]: I0121 09:30:04.670972 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vgn4p" event={"ID":"b1401cee-74bd-45dd-b2c8-e9ff222854dc","Type":"ContainerDied","Data":"ab56acb7d7845a18d1249854b7c3702f03a40eaf2223d24d327ac7b1ba5be072"}
Jan 21 09:30:04 crc kubenswrapper[5113]: I0121 09:30:04.671017 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vgn4p" event={"ID":"b1401cee-74bd-45dd-b2c8-e9ff222854dc","Type":"ContainerDied","Data":"46b40d05b26d3d86cac52545686f98e5db6498b1c0d5d7957e17e91ab1665341"}
Jan 21 09:30:04 crc kubenswrapper[5113]: I0121 09:30:04.671039 5113 scope.go:117] "RemoveContainer" containerID="ab56acb7d7845a18d1249854b7c3702f03a40eaf2223d24d327ac7b1ba5be072"
Jan 21 09:30:04 crc kubenswrapper[5113]: I0121 09:30:04.688365 5113 scope.go:117] "RemoveContainer" containerID="43768afbb54fa95392c3eed2b82ccaab3c0ee98a5a97981ab37d06055f9c1b18"
Jan 21 09:30:04 crc kubenswrapper[5113]: I0121 09:30:04.704777 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vgn4p"]
Jan 21 09:30:04 crc kubenswrapper[5113]: I0121 09:30:04.713388 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vgn4p"]
Jan 21 09:30:04 crc kubenswrapper[5113]: I0121 09:30:04.722959 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jxprx"
Jan 21 09:30:04 crc kubenswrapper[5113]: I0121 09:30:04.728711 5113 scope.go:117] "RemoveContainer" containerID="bc897800138f9fcd2e692cde96b786a07e3fdf18bb6197717a34c5f9502c5740"
Jan 21 09:30:04 crc kubenswrapper[5113]: I0121 09:30:04.745799 5113 scope.go:117] "RemoveContainer" containerID="ab56acb7d7845a18d1249854b7c3702f03a40eaf2223d24d327ac7b1ba5be072"
Jan 21 09:30:04 crc kubenswrapper[5113]: E0121 09:30:04.746801 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab56acb7d7845a18d1249854b7c3702f03a40eaf2223d24d327ac7b1ba5be072\": container with ID starting with ab56acb7d7845a18d1249854b7c3702f03a40eaf2223d24d327ac7b1ba5be072 not found: ID does not exist" containerID="ab56acb7d7845a18d1249854b7c3702f03a40eaf2223d24d327ac7b1ba5be072"
Jan 21 09:30:04 crc kubenswrapper[5113]: I0121 09:30:04.747302 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab56acb7d7845a18d1249854b7c3702f03a40eaf2223d24d327ac7b1ba5be072"} err="failed to get container status \"ab56acb7d7845a18d1249854b7c3702f03a40eaf2223d24d327ac7b1ba5be072\": rpc error: code = NotFound desc = could not find container \"ab56acb7d7845a18d1249854b7c3702f03a40eaf2223d24d327ac7b1ba5be072\": container with ID starting with ab56acb7d7845a18d1249854b7c3702f03a40eaf2223d24d327ac7b1ba5be072 not found: ID does not exist"
Jan 21 09:30:04 crc kubenswrapper[5113]: I0121 09:30:04.747397 5113 scope.go:117] "RemoveContainer" containerID="43768afbb54fa95392c3eed2b82ccaab3c0ee98a5a97981ab37d06055f9c1b18"
Jan 21 09:30:04 crc kubenswrapper[5113]: E0121 09:30:04.748033 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43768afbb54fa95392c3eed2b82ccaab3c0ee98a5a97981ab37d06055f9c1b18\": container with ID starting with 43768afbb54fa95392c3eed2b82ccaab3c0ee98a5a97981ab37d06055f9c1b18 not found: ID does not exist" containerID="43768afbb54fa95392c3eed2b82ccaab3c0ee98a5a97981ab37d06055f9c1b18"
Jan 21 09:30:04 crc kubenswrapper[5113]: I0121 09:30:04.748083 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43768afbb54fa95392c3eed2b82ccaab3c0ee98a5a97981ab37d06055f9c1b18"} err="failed to get container status \"43768afbb54fa95392c3eed2b82ccaab3c0ee98a5a97981ab37d06055f9c1b18\": rpc error: code = NotFound desc = could not find container \"43768afbb54fa95392c3eed2b82ccaab3c0ee98a5a97981ab37d06055f9c1b18\": container with ID starting with 43768afbb54fa95392c3eed2b82ccaab3c0ee98a5a97981ab37d06055f9c1b18 not found: ID does not exist"
Jan 21 09:30:04 crc kubenswrapper[5113]: I0121 09:30:04.748153 5113 scope.go:117] "RemoveContainer" containerID="bc897800138f9fcd2e692cde96b786a07e3fdf18bb6197717a34c5f9502c5740"
Jan 21 09:30:04 crc kubenswrapper[5113]: E0121 09:30:04.749187 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc897800138f9fcd2e692cde96b786a07e3fdf18bb6197717a34c5f9502c5740\": container with ID starting with bc897800138f9fcd2e692cde96b786a07e3fdf18bb6197717a34c5f9502c5740 not found: ID does not exist" containerID="bc897800138f9fcd2e692cde96b786a07e3fdf18bb6197717a34c5f9502c5740"
Jan 21 09:30:04 crc kubenswrapper[5113]: I0121 09:30:04.749237 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc897800138f9fcd2e692cde96b786a07e3fdf18bb6197717a34c5f9502c5740"} err="failed to get container status \"bc897800138f9fcd2e692cde96b786a07e3fdf18bb6197717a34c5f9502c5740\": rpc error: code = NotFound desc = could not find container \"bc897800138f9fcd2e692cde96b786a07e3fdf18bb6197717a34c5f9502c5740\": container with ID starting with bc897800138f9fcd2e692cde96b786a07e3fdf18bb6197717a34c5f9502c5740 not found: ID does not exist"
Jan 21 09:30:04 crc kubenswrapper[5113]: I0121 09:30:04.761489 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jxprx"]
Jan 21 09:30:04 crc kubenswrapper[5113]: I0121 09:30:04.850611 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1401cee-74bd-45dd-b2c8-e9ff222854dc" path="/var/lib/kubelet/pods/b1401cee-74bd-45dd-b2c8-e9ff222854dc/volumes"
Jan 21 09:30:04 crc kubenswrapper[5113]: I0121 09:30:04.959498 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483124-xtdgv"]
Jan 21 09:30:04 crc kubenswrapper[5113]: I0121 09:30:04.963707 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483124-xtdgv"]
Jan 21 09:30:06 crc kubenswrapper[5113]: I0121 09:30:06.683722 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jxprx" podUID="139dd7e3-50a7-47e3-9047-4f3abb6ea184" containerName="registry-server" containerID="cri-o://b0236896b893365d22e06de784285d0c3730dd5f30195c8b3859d6740e315c5a" gracePeriod=2
Jan 21 09:30:06 crc kubenswrapper[5113]: I0121 09:30:06.853391 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16a2bb76-5c6a-4cbb-a2bb-6a6cc8687894" path="/var/lib/kubelet/pods/16a2bb76-5c6a-4cbb-a2bb-6a6cc8687894/volumes"
Jan 21 09:30:07 crc kubenswrapper[5113]: I0121 09:30:07.566859 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jxprx"
Jan 21 09:30:07 crc kubenswrapper[5113]: I0121 09:30:07.614478 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/139dd7e3-50a7-47e3-9047-4f3abb6ea184-utilities\") pod \"139dd7e3-50a7-47e3-9047-4f3abb6ea184\" (UID: \"139dd7e3-50a7-47e3-9047-4f3abb6ea184\") "
Jan 21 09:30:07 crc kubenswrapper[5113]: I0121 09:30:07.614627 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xpjm\" (UniqueName: \"kubernetes.io/projected/139dd7e3-50a7-47e3-9047-4f3abb6ea184-kube-api-access-7xpjm\") pod \"139dd7e3-50a7-47e3-9047-4f3abb6ea184\" (UID: \"139dd7e3-50a7-47e3-9047-4f3abb6ea184\") "
Jan 21 09:30:07 crc kubenswrapper[5113]: I0121 09:30:07.614815 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/139dd7e3-50a7-47e3-9047-4f3abb6ea184-catalog-content\") pod \"139dd7e3-50a7-47e3-9047-4f3abb6ea184\" (UID: \"139dd7e3-50a7-47e3-9047-4f3abb6ea184\") "
Jan 21 09:30:07 crc kubenswrapper[5113]: I0121 09:30:07.615930 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/139dd7e3-50a7-47e3-9047-4f3abb6ea184-utilities" (OuterVolumeSpecName: "utilities") pod "139dd7e3-50a7-47e3-9047-4f3abb6ea184" (UID: "139dd7e3-50a7-47e3-9047-4f3abb6ea184"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:30:07 crc kubenswrapper[5113]: I0121 09:30:07.632374 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/139dd7e3-50a7-47e3-9047-4f3abb6ea184-kube-api-access-7xpjm" (OuterVolumeSpecName: "kube-api-access-7xpjm") pod "139dd7e3-50a7-47e3-9047-4f3abb6ea184" (UID: "139dd7e3-50a7-47e3-9047-4f3abb6ea184"). InnerVolumeSpecName "kube-api-access-7xpjm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:30:07 crc kubenswrapper[5113]: I0121 09:30:07.669289 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/139dd7e3-50a7-47e3-9047-4f3abb6ea184-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "139dd7e3-50a7-47e3-9047-4f3abb6ea184" (UID: "139dd7e3-50a7-47e3-9047-4f3abb6ea184"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:30:07 crc kubenswrapper[5113]: I0121 09:30:07.694356 5113 generic.go:358] "Generic (PLEG): container finished" podID="139dd7e3-50a7-47e3-9047-4f3abb6ea184" containerID="b0236896b893365d22e06de784285d0c3730dd5f30195c8b3859d6740e315c5a" exitCode=0
Jan 21 09:30:07 crc kubenswrapper[5113]: I0121 09:30:07.694445 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jxprx"
Jan 21 09:30:07 crc kubenswrapper[5113]: I0121 09:30:07.694484 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jxprx" event={"ID":"139dd7e3-50a7-47e3-9047-4f3abb6ea184","Type":"ContainerDied","Data":"b0236896b893365d22e06de784285d0c3730dd5f30195c8b3859d6740e315c5a"}
Jan 21 09:30:07 crc kubenswrapper[5113]: I0121 09:30:07.695083 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jxprx" event={"ID":"139dd7e3-50a7-47e3-9047-4f3abb6ea184","Type":"ContainerDied","Data":"96a6cb157c76bb273a43b9a5cf585ed9888b73fd8633107b309a89393b3b5e39"}
Jan 21 09:30:07 crc kubenswrapper[5113]: I0121 09:30:07.695107 5113 scope.go:117] "RemoveContainer" containerID="b0236896b893365d22e06de784285d0c3730dd5f30195c8b3859d6740e315c5a"
Jan 21 09:30:07 crc kubenswrapper[5113]: I0121 09:30:07.717619 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7xpjm\" (UniqueName: \"kubernetes.io/projected/139dd7e3-50a7-47e3-9047-4f3abb6ea184-kube-api-access-7xpjm\") on node \"crc\" DevicePath \"\""
Jan 21 09:30:07 crc kubenswrapper[5113]: I0121 09:30:07.717653 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/139dd7e3-50a7-47e3-9047-4f3abb6ea184-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 09:30:07 crc kubenswrapper[5113]: I0121 09:30:07.717666 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/139dd7e3-50a7-47e3-9047-4f3abb6ea184-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 09:30:07 crc kubenswrapper[5113]: I0121 09:30:07.721607 5113 scope.go:117] "RemoveContainer" containerID="a4daa88da17233ad2970ef39c3983fb31fdac0ac0bfd494762a2ca2c71121565"
Jan 21 09:30:07 crc kubenswrapper[5113]: I0121 09:30:07.728595 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jxprx"]
Jan 21 09:30:07 crc kubenswrapper[5113]: I0121 09:30:07.732444 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jxprx"]
Jan 21 09:30:07 crc kubenswrapper[5113]: I0121 09:30:07.748117 5113 scope.go:117] "RemoveContainer" containerID="a8258dac9c23314cc7f7aa0eea240698ebf915189d381d353f841921b3777d03"
Jan 21 09:30:07 crc kubenswrapper[5113]: I0121 09:30:07.765474 5113 scope.go:117] "RemoveContainer" containerID="b0236896b893365d22e06de784285d0c3730dd5f30195c8b3859d6740e315c5a"
Jan 21 09:30:07 crc kubenswrapper[5113]: E0121 09:30:07.766006 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0236896b893365d22e06de784285d0c3730dd5f30195c8b3859d6740e315c5a\": container with ID starting with b0236896b893365d22e06de784285d0c3730dd5f30195c8b3859d6740e315c5a not found: ID does not exist" containerID="b0236896b893365d22e06de784285d0c3730dd5f30195c8b3859d6740e315c5a"
Jan 21 09:30:07 crc kubenswrapper[5113]: I0121 09:30:07.766031 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0236896b893365d22e06de784285d0c3730dd5f30195c8b3859d6740e315c5a"} err="failed to get container status \"b0236896b893365d22e06de784285d0c3730dd5f30195c8b3859d6740e315c5a\": rpc error: code = NotFound desc = could not find container \"b0236896b893365d22e06de784285d0c3730dd5f30195c8b3859d6740e315c5a\": container with ID starting with b0236896b893365d22e06de784285d0c3730dd5f30195c8b3859d6740e315c5a not found: ID does not exist"
Jan 21 09:30:07 crc kubenswrapper[5113]: I0121 09:30:07.766051 5113 scope.go:117] "RemoveContainer" containerID="a4daa88da17233ad2970ef39c3983fb31fdac0ac0bfd494762a2ca2c71121565"
Jan 21 09:30:07 crc kubenswrapper[5113]: E0121 09:30:07.766377 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4daa88da17233ad2970ef39c3983fb31fdac0ac0bfd494762a2ca2c71121565\": container with ID starting with a4daa88da17233ad2970ef39c3983fb31fdac0ac0bfd494762a2ca2c71121565 not found: ID does not exist" containerID="a4daa88da17233ad2970ef39c3983fb31fdac0ac0bfd494762a2ca2c71121565"
Jan 21 09:30:07 crc kubenswrapper[5113]: I0121 09:30:07.766432 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4daa88da17233ad2970ef39c3983fb31fdac0ac0bfd494762a2ca2c71121565"} err="failed to get container status \"a4daa88da17233ad2970ef39c3983fb31fdac0ac0bfd494762a2ca2c71121565\": rpc error: code = NotFound desc = could not find container \"a4daa88da17233ad2970ef39c3983fb31fdac0ac0bfd494762a2ca2c71121565\": container with ID starting with a4daa88da17233ad2970ef39c3983fb31fdac0ac0bfd494762a2ca2c71121565 not found: ID does not exist"
Jan 21 09:30:07 crc kubenswrapper[5113]: I0121 09:30:07.766467 5113 scope.go:117] "RemoveContainer" containerID="a8258dac9c23314cc7f7aa0eea240698ebf915189d381d353f841921b3777d03"
Jan 21 09:30:07 crc kubenswrapper[5113]: E0121 09:30:07.766972 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8258dac9c23314cc7f7aa0eea240698ebf915189d381d353f841921b3777d03\": container with ID starting with a8258dac9c23314cc7f7aa0eea240698ebf915189d381d353f841921b3777d03 not found: ID does not exist" containerID="a8258dac9c23314cc7f7aa0eea240698ebf915189d381d353f841921b3777d03"
Jan 21 09:30:07 crc kubenswrapper[5113]: I0121 09:30:07.766993 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8258dac9c23314cc7f7aa0eea240698ebf915189d381d353f841921b3777d03"} err="failed to get container status \"a8258dac9c23314cc7f7aa0eea240698ebf915189d381d353f841921b3777d03\": rpc error: code = NotFound desc = could not find container \"a8258dac9c23314cc7f7aa0eea240698ebf915189d381d353f841921b3777d03\": container with ID starting with a8258dac9c23314cc7f7aa0eea240698ebf915189d381d353f841921b3777d03 not found: ID does not exist"
Jan 21 09:30:08 crc kubenswrapper[5113]: I0121 09:30:08.849495 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="139dd7e3-50a7-47e3-9047-4f3abb6ea184" path="/var/lib/kubelet/pods/139dd7e3-50a7-47e3-9047-4f3abb6ea184/volumes"
Jan 21 09:30:09 crc kubenswrapper[5113]: I0121 09:30:09.859119 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086mvfp"]
Jan 21 09:30:09 crc kubenswrapper[5113]: I0121 09:30:09.860803 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="139dd7e3-50a7-47e3-9047-4f3abb6ea184" containerName="registry-server"
Jan 21 09:30:09 crc kubenswrapper[5113]: I0121 09:30:09.860841 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="139dd7e3-50a7-47e3-9047-4f3abb6ea184" containerName="registry-server"
Jan 21 09:30:09 crc kubenswrapper[5113]: I0121 09:30:09.860895 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b1401cee-74bd-45dd-b2c8-e9ff222854dc" containerName="registry-server"
Jan 21 09:30:09 crc kubenswrapper[5113]: I0121 09:30:09.860912 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1401cee-74bd-45dd-b2c8-e9ff222854dc" containerName="registry-server"
Jan 21 09:30:09 crc kubenswrapper[5113]: I0121 09:30:09.860970 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b1401cee-74bd-45dd-b2c8-e9ff222854dc" containerName="extract-utilities"
Jan 21 09:30:09 crc kubenswrapper[5113]: I0121 09:30:09.860987 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1401cee-74bd-45dd-b2c8-e9ff222854dc" containerName="extract-utilities"
Jan 21 09:30:09 crc kubenswrapper[5113]: I0121 09:30:09.861020 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cc41973b-d903-4dfd-854d-6da1717bc76e" containerName="collect-profiles"
Jan 21 09:30:09 crc kubenswrapper[5113]: I0121 09:30:09.861034 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc41973b-d903-4dfd-854d-6da1717bc76e" containerName="collect-profiles"
Jan 21 09:30:09 crc kubenswrapper[5113]: I0121 09:30:09.861070 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="139dd7e3-50a7-47e3-9047-4f3abb6ea184" containerName="extract-utilities"
Jan 21 09:30:09 crc kubenswrapper[5113]: I0121 09:30:09.861083 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="139dd7e3-50a7-47e3-9047-4f3abb6ea184" containerName="extract-utilities"
Jan 21 09:30:09 crc kubenswrapper[5113]: I0121 09:30:09.861111 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b1401cee-74bd-45dd-b2c8-e9ff222854dc" containerName="extract-content"
Jan 21 09:30:09 crc kubenswrapper[5113]: I0121 09:30:09.861125 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1401cee-74bd-45dd-b2c8-e9ff222854dc" containerName="extract-content"
Jan 21 09:30:09 crc kubenswrapper[5113]: I0121 09:30:09.861151 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="08901054-a8e8-48af-b623-594a806778e6" containerName="oc"
Jan 21 09:30:09 crc kubenswrapper[5113]: I0121 09:30:09.861165 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="08901054-a8e8-48af-b623-594a806778e6" containerName="oc"
Jan 21 09:30:09 crc kubenswrapper[5113]: I0121 09:30:09.861187 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="139dd7e3-50a7-47e3-9047-4f3abb6ea184" containerName="extract-content"
Jan 21 09:30:09 crc kubenswrapper[5113]: I0121 09:30:09.861202 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="139dd7e3-50a7-47e3-9047-4f3abb6ea184" containerName="extract-content"
Jan 21 09:30:09 crc kubenswrapper[5113]: I0121 09:30:09.861396 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="b1401cee-74bd-45dd-b2c8-e9ff222854dc" containerName="registry-server"
Jan 21 09:30:09 crc kubenswrapper[5113]: I0121 09:30:09.861418 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="08901054-a8e8-48af-b623-594a806778e6" containerName="oc"
Jan 21 09:30:09 crc kubenswrapper[5113]: I0121 09:30:09.861449 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="139dd7e3-50a7-47e3-9047-4f3abb6ea184" containerName="registry-server"
Jan 21 09:30:09 crc kubenswrapper[5113]: I0121 09:30:09.861466 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="cc41973b-d903-4dfd-854d-6da1717bc76e" containerName="collect-profiles"
Jan 21 09:30:09 crc kubenswrapper[5113]: I0121 09:30:09.890010 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086mvfp"]
Jan 21 09:30:09 crc kubenswrapper[5113]: I0121 09:30:09.890206 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086mvfp"
Jan 21 09:30:09 crc kubenswrapper[5113]: I0121 09:30:09.893067 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\""
Jan 21 09:30:09 crc kubenswrapper[5113]: I0121 09:30:09.943880 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7284cc68-b573-49c7-b1cd-3c46715c1604-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086mvfp\" (UID: \"7284cc68-b573-49c7-b1cd-3c46715c1604\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086mvfp"
Jan 21 09:30:09 crc kubenswrapper[5113]: I0121 09:30:09.943946 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7284cc68-b573-49c7-b1cd-3c46715c1604-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086mvfp\" (UID: \"7284cc68-b573-49c7-b1cd-3c46715c1604\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086mvfp"
Jan 21 09:30:09 crc kubenswrapper[5113]: I0121 09:30:09.944032 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r26kj\" (UniqueName: \"kubernetes.io/projected/7284cc68-b573-49c7-b1cd-3c46715c1604-kube-api-access-r26kj\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086mvfp\" (UID: \"7284cc68-b573-49c7-b1cd-3c46715c1604\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086mvfp"
Jan 21 09:30:10 crc kubenswrapper[5113]: I0121 09:30:10.046089 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r26kj\" (UniqueName: \"kubernetes.io/projected/7284cc68-b573-49c7-b1cd-3c46715c1604-kube-api-access-r26kj\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086mvfp\" (UID: \"7284cc68-b573-49c7-b1cd-3c46715c1604\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086mvfp"
Jan 21 09:30:10 crc kubenswrapper[5113]: I0121 09:30:10.046243 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7284cc68-b573-49c7-b1cd-3c46715c1604-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086mvfp\" (UID: \"7284cc68-b573-49c7-b1cd-3c46715c1604\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086mvfp"
Jan 21 09:30:10 crc kubenswrapper[5113]: I0121 09:30:10.046419 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7284cc68-b573-49c7-b1cd-3c46715c1604-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086mvfp\" (UID: \"7284cc68-b573-49c7-b1cd-3c46715c1604\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086mvfp"
Jan 21 09:30:10 crc kubenswrapper[5113]: I0121 09:30:10.047331 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7284cc68-b573-49c7-b1cd-3c46715c1604-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086mvfp\" (UID: \"7284cc68-b573-49c7-b1cd-3c46715c1604\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086mvfp"
Jan 21 09:30:10 crc kubenswrapper[5113]: I0121 09:30:10.047585 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7284cc68-b573-49c7-b1cd-3c46715c1604-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086mvfp\" (UID: \"7284cc68-b573-49c7-b1cd-3c46715c1604\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086mvfp"
Jan 21 09:30:10 crc kubenswrapper[5113]: I0121 09:30:10.066239 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r26kj\" (UniqueName: \"kubernetes.io/projected/7284cc68-b573-49c7-b1cd-3c46715c1604-kube-api-access-r26kj\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086mvfp\" (UID: \"7284cc68-b573-49c7-b1cd-3c46715c1604\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086mvfp"
Jan 21 09:30:10 crc kubenswrapper[5113]: I0121 09:30:10.207625 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086mvfp"
Jan 21 09:30:10 crc kubenswrapper[5113]: I0121 09:30:10.496886 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086mvfp"]
Jan 21 09:30:10 crc kubenswrapper[5113]: I0121 09:30:10.715358 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086mvfp" event={"ID":"7284cc68-b573-49c7-b1cd-3c46715c1604","Type":"ContainerStarted","Data":"5c08907bdf02070d2590740ebf802c7d43cf6cf38e64b724c94182be6d3ad0d8"}
Jan 21 09:30:10 crc kubenswrapper[5113]: I0121 09:30:10.715407 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086mvfp" event={"ID":"7284cc68-b573-49c7-b1cd-3c46715c1604","Type":"ContainerStarted","Data":"7d1c43360880caf4a526bb05e9c9e7ae2218ba501ecae190c22a5fb3615244ee"}
Jan 21 09:30:11 crc kubenswrapper[5113]: I0121 09:30:11.725776 5113 generic.go:358] "Generic (PLEG): container finished" podID="7284cc68-b573-49c7-b1cd-3c46715c1604" containerID="5c08907bdf02070d2590740ebf802c7d43cf6cf38e64b724c94182be6d3ad0d8" exitCode=0
Jan 21 09:30:11 crc kubenswrapper[5113]: I0121 09:30:11.726127 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086mvfp" event={"ID":"7284cc68-b573-49c7-b1cd-3c46715c1604","Type":"ContainerDied","Data":"5c08907bdf02070d2590740ebf802c7d43cf6cf38e64b724c94182be6d3ad0d8"}
Jan 21 09:30:13 crc kubenswrapper[5113]: I0121 09:30:13.743761 5113 generic.go:358] "Generic (PLEG): container finished" podID="7284cc68-b573-49c7-b1cd-3c46715c1604" containerID="8f06fbb27b9fdf5f90eff6e96edc5dd3893d7fbc1aa4964ec8ee96a1c31bd440" exitCode=0
Jan 21 09:30:13 crc kubenswrapper[5113]: I0121 09:30:13.743841 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086mvfp" event={"ID":"7284cc68-b573-49c7-b1cd-3c46715c1604","Type":"ContainerDied","Data":"8f06fbb27b9fdf5f90eff6e96edc5dd3893d7fbc1aa4964ec8ee96a1c31bd440"}
Jan 21 09:30:14 crc kubenswrapper[5113]: I0121 09:30:14.408272 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8qcgd"]
Jan 21 09:30:14 crc kubenswrapper[5113]: I0121 09:30:14.417287 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8qcgd"
Jan 21 09:30:14 crc kubenswrapper[5113]: I0121 09:30:14.437822 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8qcgd"]
Jan 21 09:30:14 crc kubenswrapper[5113]: I0121 09:30:14.513005 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a48d508-39d9-4ba5-bc97-1355f781b5b2-catalog-content\") pod \"redhat-operators-8qcgd\" (UID: \"5a48d508-39d9-4ba5-bc97-1355f781b5b2\") " pod="openshift-marketplace/redhat-operators-8qcgd"
Jan 21 09:30:14 crc kubenswrapper[5113]: I0121 09:30:14.513082 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxt5h\" (UniqueName: \"kubernetes.io/projected/5a48d508-39d9-4ba5-bc97-1355f781b5b2-kube-api-access-hxt5h\") pod \"redhat-operators-8qcgd\" (UID: \"5a48d508-39d9-4ba5-bc97-1355f781b5b2\") " pod="openshift-marketplace/redhat-operators-8qcgd"
Jan 21 09:30:14 crc kubenswrapper[5113]: I0121 09:30:14.513183 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a48d508-39d9-4ba5-bc97-1355f781b5b2-utilities\") pod \"redhat-operators-8qcgd\" (UID: \"5a48d508-39d9-4ba5-bc97-1355f781b5b2\") " pod="openshift-marketplace/redhat-operators-8qcgd"
Jan 21 09:30:14 crc kubenswrapper[5113]: I0121 09:30:14.614729 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a48d508-39d9-4ba5-bc97-1355f781b5b2-catalog-content\") pod \"redhat-operators-8qcgd\" (UID: \"5a48d508-39d9-4ba5-bc97-1355f781b5b2\") " pod="openshift-marketplace/redhat-operators-8qcgd"
Jan 21 09:30:14 crc kubenswrapper[5113]: I0121 09:30:14.614879 5113 reconciler_common.go:224] "operationExecutor.MountVolume started
for volume \"kube-api-access-hxt5h\" (UniqueName: \"kubernetes.io/projected/5a48d508-39d9-4ba5-bc97-1355f781b5b2-kube-api-access-hxt5h\") pod \"redhat-operators-8qcgd\" (UID: \"5a48d508-39d9-4ba5-bc97-1355f781b5b2\") " pod="openshift-marketplace/redhat-operators-8qcgd" Jan 21 09:30:14 crc kubenswrapper[5113]: I0121 09:30:14.614952 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a48d508-39d9-4ba5-bc97-1355f781b5b2-utilities\") pod \"redhat-operators-8qcgd\" (UID: \"5a48d508-39d9-4ba5-bc97-1355f781b5b2\") " pod="openshift-marketplace/redhat-operators-8qcgd" Jan 21 09:30:14 crc kubenswrapper[5113]: I0121 09:30:14.615398 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a48d508-39d9-4ba5-bc97-1355f781b5b2-catalog-content\") pod \"redhat-operators-8qcgd\" (UID: \"5a48d508-39d9-4ba5-bc97-1355f781b5b2\") " pod="openshift-marketplace/redhat-operators-8qcgd" Jan 21 09:30:14 crc kubenswrapper[5113]: I0121 09:30:14.615576 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a48d508-39d9-4ba5-bc97-1355f781b5b2-utilities\") pod \"redhat-operators-8qcgd\" (UID: \"5a48d508-39d9-4ba5-bc97-1355f781b5b2\") " pod="openshift-marketplace/redhat-operators-8qcgd" Jan 21 09:30:14 crc kubenswrapper[5113]: I0121 09:30:14.643524 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxt5h\" (UniqueName: \"kubernetes.io/projected/5a48d508-39d9-4ba5-bc97-1355f781b5b2-kube-api-access-hxt5h\") pod \"redhat-operators-8qcgd\" (UID: \"5a48d508-39d9-4ba5-bc97-1355f781b5b2\") " pod="openshift-marketplace/redhat-operators-8qcgd" Jan 21 09:30:14 crc kubenswrapper[5113]: I0121 09:30:14.744397 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8qcgd" Jan 21 09:30:14 crc kubenswrapper[5113]: I0121 09:30:14.754893 5113 generic.go:358] "Generic (PLEG): container finished" podID="7284cc68-b573-49c7-b1cd-3c46715c1604" containerID="2da8ea0e06389fbc07266ad8d29a523cfe27f25201c31796c2a205fbf48f41f6" exitCode=0 Jan 21 09:30:14 crc kubenswrapper[5113]: I0121 09:30:14.754952 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086mvfp" event={"ID":"7284cc68-b573-49c7-b1cd-3c46715c1604","Type":"ContainerDied","Data":"2da8ea0e06389fbc07266ad8d29a523cfe27f25201c31796c2a205fbf48f41f6"} Jan 21 09:30:14 crc kubenswrapper[5113]: I0121 09:30:14.998400 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8qcgd"] Jan 21 09:30:15 crc kubenswrapper[5113]: W0121 09:30:15.009354 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a48d508_39d9_4ba5_bc97_1355f781b5b2.slice/crio-45535633d56219d80ddc51bb89a7e7e3c173d9faa6090cc9b1c82c9b06d5bb40 WatchSource:0}: Error finding container 45535633d56219d80ddc51bb89a7e7e3c173d9faa6090cc9b1c82c9b06d5bb40: Status 404 returned error can't find the container with id 45535633d56219d80ddc51bb89a7e7e3c173d9faa6090cc9b1c82c9b06d5bb40 Jan 21 09:30:15 crc kubenswrapper[5113]: I0121 09:30:15.764065 5113 generic.go:358] "Generic (PLEG): container finished" podID="5a48d508-39d9-4ba5-bc97-1355f781b5b2" containerID="208e1cd34110471fbf0e4ce309d2fd9557a9b092aa86bee2d19c21d11974ba84" exitCode=0 Jan 21 09:30:15 crc kubenswrapper[5113]: I0121 09:30:15.764148 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8qcgd" event={"ID":"5a48d508-39d9-4ba5-bc97-1355f781b5b2","Type":"ContainerDied","Data":"208e1cd34110471fbf0e4ce309d2fd9557a9b092aa86bee2d19c21d11974ba84"} Jan 21 09:30:15 crc 
kubenswrapper[5113]: I0121 09:30:15.764657 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8qcgd" event={"ID":"5a48d508-39d9-4ba5-bc97-1355f781b5b2","Type":"ContainerStarted","Data":"45535633d56219d80ddc51bb89a7e7e3c173d9faa6090cc9b1c82c9b06d5bb40"} Jan 21 09:30:16 crc kubenswrapper[5113]: I0121 09:30:16.110606 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086mvfp" Jan 21 09:30:16 crc kubenswrapper[5113]: I0121 09:30:16.232437 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r26kj\" (UniqueName: \"kubernetes.io/projected/7284cc68-b573-49c7-b1cd-3c46715c1604-kube-api-access-r26kj\") pod \"7284cc68-b573-49c7-b1cd-3c46715c1604\" (UID: \"7284cc68-b573-49c7-b1cd-3c46715c1604\") " Jan 21 09:30:16 crc kubenswrapper[5113]: I0121 09:30:16.232514 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7284cc68-b573-49c7-b1cd-3c46715c1604-util\") pod \"7284cc68-b573-49c7-b1cd-3c46715c1604\" (UID: \"7284cc68-b573-49c7-b1cd-3c46715c1604\") " Jan 21 09:30:16 crc kubenswrapper[5113]: I0121 09:30:16.232553 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7284cc68-b573-49c7-b1cd-3c46715c1604-bundle\") pod \"7284cc68-b573-49c7-b1cd-3c46715c1604\" (UID: \"7284cc68-b573-49c7-b1cd-3c46715c1604\") " Jan 21 09:30:16 crc kubenswrapper[5113]: I0121 09:30:16.234636 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7284cc68-b573-49c7-b1cd-3c46715c1604-bundle" (OuterVolumeSpecName: "bundle") pod "7284cc68-b573-49c7-b1cd-3c46715c1604" (UID: "7284cc68-b573-49c7-b1cd-3c46715c1604"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:30:16 crc kubenswrapper[5113]: I0121 09:30:16.240473 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7284cc68-b573-49c7-b1cd-3c46715c1604-kube-api-access-r26kj" (OuterVolumeSpecName: "kube-api-access-r26kj") pod "7284cc68-b573-49c7-b1cd-3c46715c1604" (UID: "7284cc68-b573-49c7-b1cd-3c46715c1604"). InnerVolumeSpecName "kube-api-access-r26kj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:30:16 crc kubenswrapper[5113]: I0121 09:30:16.252885 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7284cc68-b573-49c7-b1cd-3c46715c1604-util" (OuterVolumeSpecName: "util") pod "7284cc68-b573-49c7-b1cd-3c46715c1604" (UID: "7284cc68-b573-49c7-b1cd-3c46715c1604"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:30:16 crc kubenswrapper[5113]: I0121 09:30:16.334554 5113 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7284cc68-b573-49c7-b1cd-3c46715c1604-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 09:30:16 crc kubenswrapper[5113]: I0121 09:30:16.334605 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r26kj\" (UniqueName: \"kubernetes.io/projected/7284cc68-b573-49c7-b1cd-3c46715c1604-kube-api-access-r26kj\") on node \"crc\" DevicePath \"\"" Jan 21 09:30:16 crc kubenswrapper[5113]: I0121 09:30:16.334627 5113 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7284cc68-b573-49c7-b1cd-3c46715c1604-util\") on node \"crc\" DevicePath \"\"" Jan 21 09:30:16 crc kubenswrapper[5113]: I0121 09:30:16.776457 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086mvfp" Jan 21 09:30:16 crc kubenswrapper[5113]: I0121 09:30:16.776461 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086mvfp" event={"ID":"7284cc68-b573-49c7-b1cd-3c46715c1604","Type":"ContainerDied","Data":"7d1c43360880caf4a526bb05e9c9e7ae2218ba501ecae190c22a5fb3615244ee"} Jan 21 09:30:16 crc kubenswrapper[5113]: I0121 09:30:16.777030 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d1c43360880caf4a526bb05e9c9e7ae2218ba501ecae190c22a5fb3615244ee" Jan 21 09:30:17 crc kubenswrapper[5113]: I0121 09:30:17.248996 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejfl7w"] Jan 21 09:30:17 crc kubenswrapper[5113]: I0121 09:30:17.249463 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7284cc68-b573-49c7-b1cd-3c46715c1604" containerName="pull" Jan 21 09:30:17 crc kubenswrapper[5113]: I0121 09:30:17.249479 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="7284cc68-b573-49c7-b1cd-3c46715c1604" containerName="pull" Jan 21 09:30:17 crc kubenswrapper[5113]: I0121 09:30:17.249498 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7284cc68-b573-49c7-b1cd-3c46715c1604" containerName="util" Jan 21 09:30:17 crc kubenswrapper[5113]: I0121 09:30:17.249503 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="7284cc68-b573-49c7-b1cd-3c46715c1604" containerName="util" Jan 21 09:30:17 crc kubenswrapper[5113]: I0121 09:30:17.249521 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7284cc68-b573-49c7-b1cd-3c46715c1604" containerName="extract" Jan 21 09:30:17 crc kubenswrapper[5113]: I0121 09:30:17.249526 5113 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="7284cc68-b573-49c7-b1cd-3c46715c1604" containerName="extract" Jan 21 09:30:17 crc kubenswrapper[5113]: I0121 09:30:17.249606 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="7284cc68-b573-49c7-b1cd-3c46715c1604" containerName="extract" Jan 21 09:30:17 crc kubenswrapper[5113]: I0121 09:30:17.252892 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejfl7w" Jan 21 09:30:17 crc kubenswrapper[5113]: I0121 09:30:17.257565 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Jan 21 09:30:17 crc kubenswrapper[5113]: I0121 09:30:17.264222 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejfl7w"] Jan 21 09:30:17 crc kubenswrapper[5113]: I0121 09:30:17.348343 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6464cb6c-1ad4-4eef-b492-4351e8fb8d3a-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejfl7w\" (UID: \"6464cb6c-1ad4-4eef-b492-4351e8fb8d3a\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejfl7w" Jan 21 09:30:17 crc kubenswrapper[5113]: I0121 09:30:17.348397 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6464cb6c-1ad4-4eef-b492-4351e8fb8d3a-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejfl7w\" (UID: \"6464cb6c-1ad4-4eef-b492-4351e8fb8d3a\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejfl7w" Jan 21 09:30:17 crc kubenswrapper[5113]: I0121 09:30:17.348429 5113 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4hvb\" (UniqueName: \"kubernetes.io/projected/6464cb6c-1ad4-4eef-b492-4351e8fb8d3a-kube-api-access-d4hvb\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejfl7w\" (UID: \"6464cb6c-1ad4-4eef-b492-4351e8fb8d3a\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejfl7w" Jan 21 09:30:17 crc kubenswrapper[5113]: I0121 09:30:17.449251 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6464cb6c-1ad4-4eef-b492-4351e8fb8d3a-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejfl7w\" (UID: \"6464cb6c-1ad4-4eef-b492-4351e8fb8d3a\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejfl7w" Jan 21 09:30:17 crc kubenswrapper[5113]: I0121 09:30:17.449306 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6464cb6c-1ad4-4eef-b492-4351e8fb8d3a-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejfl7w\" (UID: \"6464cb6c-1ad4-4eef-b492-4351e8fb8d3a\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejfl7w" Jan 21 09:30:17 crc kubenswrapper[5113]: I0121 09:30:17.449335 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d4hvb\" (UniqueName: \"kubernetes.io/projected/6464cb6c-1ad4-4eef-b492-4351e8fb8d3a-kube-api-access-d4hvb\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejfl7w\" (UID: \"6464cb6c-1ad4-4eef-b492-4351e8fb8d3a\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejfl7w" Jan 21 09:30:17 crc kubenswrapper[5113]: I0121 09:30:17.450204 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/6464cb6c-1ad4-4eef-b492-4351e8fb8d3a-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejfl7w\" (UID: \"6464cb6c-1ad4-4eef-b492-4351e8fb8d3a\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejfl7w" Jan 21 09:30:17 crc kubenswrapper[5113]: I0121 09:30:17.450222 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6464cb6c-1ad4-4eef-b492-4351e8fb8d3a-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejfl7w\" (UID: \"6464cb6c-1ad4-4eef-b492-4351e8fb8d3a\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejfl7w" Jan 21 09:30:17 crc kubenswrapper[5113]: I0121 09:30:17.470398 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4hvb\" (UniqueName: \"kubernetes.io/projected/6464cb6c-1ad4-4eef-b492-4351e8fb8d3a-kube-api-access-d4hvb\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejfl7w\" (UID: \"6464cb6c-1ad4-4eef-b492-4351e8fb8d3a\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejfl7w" Jan 21 09:30:17 crc kubenswrapper[5113]: I0121 09:30:17.601157 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejfl7w" Jan 21 09:30:17 crc kubenswrapper[5113]: I0121 09:30:17.787030 5113 generic.go:358] "Generic (PLEG): container finished" podID="5a48d508-39d9-4ba5-bc97-1355f781b5b2" containerID="5c90b4924c242fcfd92ef487c83373ec5d9e713bdcc3f831886a70bfebbf5c7b" exitCode=0 Jan 21 09:30:17 crc kubenswrapper[5113]: I0121 09:30:17.787091 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8qcgd" event={"ID":"5a48d508-39d9-4ba5-bc97-1355f781b5b2","Type":"ContainerDied","Data":"5c90b4924c242fcfd92ef487c83373ec5d9e713bdcc3f831886a70bfebbf5c7b"} Jan 21 09:30:17 crc kubenswrapper[5113]: I0121 09:30:17.812424 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejfl7w"] Jan 21 09:30:17 crc kubenswrapper[5113]: W0121 09:30:17.835630 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6464cb6c_1ad4_4eef_b492_4351e8fb8d3a.slice/crio-3ddecd45a79fa092075e65ac1c87c781a7dfdb14f92c62c339b68f7d4082a7d5 WatchSource:0}: Error finding container 3ddecd45a79fa092075e65ac1c87c781a7dfdb14f92c62c339b68f7d4082a7d5: Status 404 returned error can't find the container with id 3ddecd45a79fa092075e65ac1c87c781a7dfdb14f92c62c339b68f7d4082a7d5 Jan 21 09:30:18 crc kubenswrapper[5113]: I0121 09:30:18.238866 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnb4sw"] Jan 21 09:30:18 crc kubenswrapper[5113]: I0121 09:30:18.242878 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnb4sw" Jan 21 09:30:18 crc kubenswrapper[5113]: I0121 09:30:18.258077 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnb4sw"] Jan 21 09:30:18 crc kubenswrapper[5113]: I0121 09:30:18.363550 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnb4sw\" (UID: \"95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnb4sw" Jan 21 09:30:18 crc kubenswrapper[5113]: I0121 09:30:18.363635 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnb4sw\" (UID: \"95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnb4sw" Jan 21 09:30:18 crc kubenswrapper[5113]: I0121 09:30:18.363706 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ds8q\" (UniqueName: \"kubernetes.io/projected/95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe-kube-api-access-9ds8q\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnb4sw\" (UID: \"95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnb4sw" Jan 21 09:30:18 crc kubenswrapper[5113]: I0121 09:30:18.465626 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnb4sw\" (UID: \"95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnb4sw" Jan 21 09:30:18 crc kubenswrapper[5113]: I0121 09:30:18.465808 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnb4sw\" (UID: \"95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnb4sw" Jan 21 09:30:18 crc kubenswrapper[5113]: I0121 09:30:18.465902 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9ds8q\" (UniqueName: \"kubernetes.io/projected/95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe-kube-api-access-9ds8q\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnb4sw\" (UID: \"95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnb4sw" Jan 21 09:30:18 crc kubenswrapper[5113]: I0121 09:30:18.467254 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnb4sw\" (UID: \"95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnb4sw" Jan 21 09:30:18 crc kubenswrapper[5113]: I0121 09:30:18.467383 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnb4sw\" (UID: 
\"95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnb4sw" Jan 21 09:30:18 crc kubenswrapper[5113]: I0121 09:30:18.500694 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ds8q\" (UniqueName: \"kubernetes.io/projected/95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe-kube-api-access-9ds8q\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnb4sw\" (UID: \"95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnb4sw" Jan 21 09:30:18 crc kubenswrapper[5113]: I0121 09:30:18.565405 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnb4sw" Jan 21 09:30:18 crc kubenswrapper[5113]: I0121 09:30:18.800325 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8qcgd" event={"ID":"5a48d508-39d9-4ba5-bc97-1355f781b5b2","Type":"ContainerStarted","Data":"8ea082b667c88244005980c86e771b31f4ff40bb7cb05e564744133924c5b315"} Jan 21 09:30:18 crc kubenswrapper[5113]: I0121 09:30:18.802385 5113 generic.go:358] "Generic (PLEG): container finished" podID="6464cb6c-1ad4-4eef-b492-4351e8fb8d3a" containerID="3d8fd36ef7ff8daa4e61b86593defda0b439d91ef4a5b627882b6c7bf4f2d6c8" exitCode=0 Jan 21 09:30:18 crc kubenswrapper[5113]: I0121 09:30:18.802506 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejfl7w" event={"ID":"6464cb6c-1ad4-4eef-b492-4351e8fb8d3a","Type":"ContainerDied","Data":"3d8fd36ef7ff8daa4e61b86593defda0b439d91ef4a5b627882b6c7bf4f2d6c8"} Jan 21 09:30:18 crc kubenswrapper[5113]: I0121 09:30:18.802551 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejfl7w" 
event={"ID":"6464cb6c-1ad4-4eef-b492-4351e8fb8d3a","Type":"ContainerStarted","Data":"3ddecd45a79fa092075e65ac1c87c781a7dfdb14f92c62c339b68f7d4082a7d5"} Jan 21 09:30:18 crc kubenswrapper[5113]: I0121 09:30:18.808265 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnb4sw"] Jan 21 09:30:18 crc kubenswrapper[5113]: I0121 09:30:18.827516 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8qcgd" podStartSLOduration=3.766751675 podStartE2EDuration="4.827483906s" podCreationTimestamp="2026-01-21 09:30:14 +0000 UTC" firstStartedPulling="2026-01-21 09:30:15.765779594 +0000 UTC m=+745.266606683" lastFinishedPulling="2026-01-21 09:30:16.826511835 +0000 UTC m=+746.327338914" observedRunningTime="2026-01-21 09:30:18.819857485 +0000 UTC m=+748.320684554" watchObservedRunningTime="2026-01-21 09:30:18.827483906 +0000 UTC m=+748.328310965" Jan 21 09:30:19 crc kubenswrapper[5113]: I0121 09:30:19.811335 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejfl7w" event={"ID":"6464cb6c-1ad4-4eef-b492-4351e8fb8d3a","Type":"ContainerStarted","Data":"66f3c09cfdcd387db3dd44a9d5c5efe00282302767a3e6f2d7d10366b0e007c6"} Jan 21 09:30:19 crc kubenswrapper[5113]: I0121 09:30:19.814336 5113 generic.go:358] "Generic (PLEG): container finished" podID="95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe" containerID="fca4a8264d4c39d232cb5c569278140cc09dabb52c03068903bd9154ffd3522f" exitCode=0 Jan 21 09:30:19 crc kubenswrapper[5113]: I0121 09:30:19.814509 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnb4sw" event={"ID":"95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe","Type":"ContainerDied","Data":"fca4a8264d4c39d232cb5c569278140cc09dabb52c03068903bd9154ffd3522f"} Jan 21 09:30:19 crc 
kubenswrapper[5113]: I0121 09:30:19.814576 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnb4sw" event={"ID":"95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe","Type":"ContainerStarted","Data":"afb8722989d1ade038395372acec9c71132e793bcdefd24b364eef27faa40d27"} Jan 21 09:30:20 crc kubenswrapper[5113]: I0121 09:30:20.823839 5113 generic.go:358] "Generic (PLEG): container finished" podID="6464cb6c-1ad4-4eef-b492-4351e8fb8d3a" containerID="66f3c09cfdcd387db3dd44a9d5c5efe00282302767a3e6f2d7d10366b0e007c6" exitCode=0 Jan 21 09:30:20 crc kubenswrapper[5113]: I0121 09:30:20.823951 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejfl7w" event={"ID":"6464cb6c-1ad4-4eef-b492-4351e8fb8d3a","Type":"ContainerDied","Data":"66f3c09cfdcd387db3dd44a9d5c5efe00282302767a3e6f2d7d10366b0e007c6"} Jan 21 09:30:21 crc kubenswrapper[5113]: I0121 09:30:21.831065 5113 generic.go:358] "Generic (PLEG): container finished" podID="6464cb6c-1ad4-4eef-b492-4351e8fb8d3a" containerID="c41ecf41f0ffc2896db9dfb4ba837bccd8cfc181c952d9d025789d092ba7d3b0" exitCode=0 Jan 21 09:30:21 crc kubenswrapper[5113]: I0121 09:30:21.831169 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejfl7w" event={"ID":"6464cb6c-1ad4-4eef-b492-4351e8fb8d3a","Type":"ContainerDied","Data":"c41ecf41f0ffc2896db9dfb4ba837bccd8cfc181c952d9d025789d092ba7d3b0"} Jan 21 09:30:22 crc kubenswrapper[5113]: I0121 09:30:22.389437 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ll4b"] Jan 21 09:30:22 crc kubenswrapper[5113]: I0121 09:30:22.602552 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ll4b"] Jan 21 
09:30:22 crc kubenswrapper[5113]: I0121 09:30:22.602592 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-phvjc"]
Jan 21 09:30:22 crc kubenswrapper[5113]: I0121 09:30:22.602783 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ll4b"
Jan 21 09:30:22 crc kubenswrapper[5113]: I0121 09:30:22.613755 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-phvjc"]
Jan 21 09:30:22 crc kubenswrapper[5113]: I0121 09:30:22.613870 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-phvjc"
Jan 21 09:30:22 crc kubenswrapper[5113]: I0121 09:30:22.627263 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcstq\" (UniqueName: \"kubernetes.io/projected/52e0414b-6283-42ab-9e76-609f811f45c8-kube-api-access-jcstq\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ll4b\" (UID: \"52e0414b-6283-42ab-9e76-609f811f45c8\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ll4b"
Jan 21 09:30:22 crc kubenswrapper[5113]: I0121 09:30:22.627349 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/52e0414b-6283-42ab-9e76-609f811f45c8-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ll4b\" (UID: \"52e0414b-6283-42ab-9e76-609f811f45c8\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ll4b"
Jan 21 09:30:22 crc kubenswrapper[5113]: I0121 09:30:22.627403 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/52e0414b-6283-42ab-9e76-609f811f45c8-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ll4b\" (UID: \"52e0414b-6283-42ab-9e76-609f811f45c8\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ll4b"
Jan 21 09:30:22 crc kubenswrapper[5113]: I0121 09:30:22.728405 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/52e0414b-6283-42ab-9e76-609f811f45c8-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ll4b\" (UID: \"52e0414b-6283-42ab-9e76-609f811f45c8\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ll4b"
Jan 21 09:30:22 crc kubenswrapper[5113]: I0121 09:30:22.728502 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jcstq\" (UniqueName: \"kubernetes.io/projected/52e0414b-6283-42ab-9e76-609f811f45c8-kube-api-access-jcstq\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ll4b\" (UID: \"52e0414b-6283-42ab-9e76-609f811f45c8\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ll4b"
Jan 21 09:30:22 crc kubenswrapper[5113]: I0121 09:30:22.728550 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61b36320-1108-4c53-b36e-485342f03802-utilities\") pod \"certified-operators-phvjc\" (UID: \"61b36320-1108-4c53-b36e-485342f03802\") " pod="openshift-marketplace/certified-operators-phvjc"
Jan 21 09:30:22 crc kubenswrapper[5113]: I0121 09:30:22.728751 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdpwk\" (UniqueName: \"kubernetes.io/projected/61b36320-1108-4c53-b36e-485342f03802-kube-api-access-qdpwk\") pod \"certified-operators-phvjc\" (UID: \"61b36320-1108-4c53-b36e-485342f03802\") " pod="openshift-marketplace/certified-operators-phvjc"
Jan 21 09:30:22 crc kubenswrapper[5113]: I0121 09:30:22.728860 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61b36320-1108-4c53-b36e-485342f03802-catalog-content\") pod \"certified-operators-phvjc\" (UID: \"61b36320-1108-4c53-b36e-485342f03802\") " pod="openshift-marketplace/certified-operators-phvjc"
Jan 21 09:30:22 crc kubenswrapper[5113]: I0121 09:30:22.728933 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/52e0414b-6283-42ab-9e76-609f811f45c8-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ll4b\" (UID: \"52e0414b-6283-42ab-9e76-609f811f45c8\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ll4b"
Jan 21 09:30:22 crc kubenswrapper[5113]: I0121 09:30:22.729447 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/52e0414b-6283-42ab-9e76-609f811f45c8-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ll4b\" (UID: \"52e0414b-6283-42ab-9e76-609f811f45c8\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ll4b"
Jan 21 09:30:22 crc kubenswrapper[5113]: I0121 09:30:22.729509 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/52e0414b-6283-42ab-9e76-609f811f45c8-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ll4b\" (UID: \"52e0414b-6283-42ab-9e76-609f811f45c8\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ll4b"
Jan 21 09:30:22 crc kubenswrapper[5113]: I0121 09:30:22.754645 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcstq\" (UniqueName: \"kubernetes.io/projected/52e0414b-6283-42ab-9e76-609f811f45c8-kube-api-access-jcstq\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ll4b\" (UID: \"52e0414b-6283-42ab-9e76-609f811f45c8\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ll4b"
Jan 21 09:30:22 crc kubenswrapper[5113]: I0121 09:30:22.830694 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61b36320-1108-4c53-b36e-485342f03802-utilities\") pod \"certified-operators-phvjc\" (UID: \"61b36320-1108-4c53-b36e-485342f03802\") " pod="openshift-marketplace/certified-operators-phvjc"
Jan 21 09:30:22 crc kubenswrapper[5113]: I0121 09:30:22.830804 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qdpwk\" (UniqueName: \"kubernetes.io/projected/61b36320-1108-4c53-b36e-485342f03802-kube-api-access-qdpwk\") pod \"certified-operators-phvjc\" (UID: \"61b36320-1108-4c53-b36e-485342f03802\") " pod="openshift-marketplace/certified-operators-phvjc"
Jan 21 09:30:22 crc kubenswrapper[5113]: I0121 09:30:22.831262 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61b36320-1108-4c53-b36e-485342f03802-catalog-content\") pod \"certified-operators-phvjc\" (UID: \"61b36320-1108-4c53-b36e-485342f03802\") " pod="openshift-marketplace/certified-operators-phvjc"
Jan 21 09:30:22 crc kubenswrapper[5113]: I0121 09:30:22.831301 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61b36320-1108-4c53-b36e-485342f03802-utilities\") pod \"certified-operators-phvjc\" (UID: \"61b36320-1108-4c53-b36e-485342f03802\") " pod="openshift-marketplace/certified-operators-phvjc"
Jan 21 09:30:22 crc kubenswrapper[5113]: I0121 09:30:22.831689 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61b36320-1108-4c53-b36e-485342f03802-catalog-content\") pod \"certified-operators-phvjc\" (UID: \"61b36320-1108-4c53-b36e-485342f03802\") " pod="openshift-marketplace/certified-operators-phvjc"
Jan 21 09:30:22 crc kubenswrapper[5113]: I0121 09:30:22.876674 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdpwk\" (UniqueName: \"kubernetes.io/projected/61b36320-1108-4c53-b36e-485342f03802-kube-api-access-qdpwk\") pod \"certified-operators-phvjc\" (UID: \"61b36320-1108-4c53-b36e-485342f03802\") " pod="openshift-marketplace/certified-operators-phvjc"
Jan 21 09:30:22 crc kubenswrapper[5113]: I0121 09:30:22.917947 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ll4b"
Jan 21 09:30:22 crc kubenswrapper[5113]: I0121 09:30:22.934651 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-phvjc"
Jan 21 09:30:23 crc kubenswrapper[5113]: I0121 09:30:23.414766 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejfl7w"
Jan 21 09:30:23 crc kubenswrapper[5113]: I0121 09:30:23.438365 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6464cb6c-1ad4-4eef-b492-4351e8fb8d3a-util\") pod \"6464cb6c-1ad4-4eef-b492-4351e8fb8d3a\" (UID: \"6464cb6c-1ad4-4eef-b492-4351e8fb8d3a\") "
Jan 21 09:30:23 crc kubenswrapper[5113]: I0121 09:30:23.438486 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6464cb6c-1ad4-4eef-b492-4351e8fb8d3a-bundle\") pod \"6464cb6c-1ad4-4eef-b492-4351e8fb8d3a\" (UID: \"6464cb6c-1ad4-4eef-b492-4351e8fb8d3a\") "
Jan 21 09:30:23 crc kubenswrapper[5113]: I0121 09:30:23.438524 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4hvb\" (UniqueName: \"kubernetes.io/projected/6464cb6c-1ad4-4eef-b492-4351e8fb8d3a-kube-api-access-d4hvb\") pod \"6464cb6c-1ad4-4eef-b492-4351e8fb8d3a\" (UID: \"6464cb6c-1ad4-4eef-b492-4351e8fb8d3a\") "
Jan 21 09:30:23 crc kubenswrapper[5113]: I0121 09:30:23.440012 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6464cb6c-1ad4-4eef-b492-4351e8fb8d3a-bundle" (OuterVolumeSpecName: "bundle") pod "6464cb6c-1ad4-4eef-b492-4351e8fb8d3a" (UID: "6464cb6c-1ad4-4eef-b492-4351e8fb8d3a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:30:23 crc kubenswrapper[5113]: I0121 09:30:23.445599 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6464cb6c-1ad4-4eef-b492-4351e8fb8d3a-kube-api-access-d4hvb" (OuterVolumeSpecName: "kube-api-access-d4hvb") pod "6464cb6c-1ad4-4eef-b492-4351e8fb8d3a" (UID: "6464cb6c-1ad4-4eef-b492-4351e8fb8d3a"). InnerVolumeSpecName "kube-api-access-d4hvb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:30:23 crc kubenswrapper[5113]: I0121 09:30:23.473919 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6464cb6c-1ad4-4eef-b492-4351e8fb8d3a-util" (OuterVolumeSpecName: "util") pod "6464cb6c-1ad4-4eef-b492-4351e8fb8d3a" (UID: "6464cb6c-1ad4-4eef-b492-4351e8fb8d3a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:30:23 crc kubenswrapper[5113]: I0121 09:30:23.540350 5113 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6464cb6c-1ad4-4eef-b492-4351e8fb8d3a-util\") on node \"crc\" DevicePath \"\""
Jan 21 09:30:23 crc kubenswrapper[5113]: I0121 09:30:23.540391 5113 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6464cb6c-1ad4-4eef-b492-4351e8fb8d3a-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 09:30:23 crc kubenswrapper[5113]: I0121 09:30:23.540403 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4hvb\" (UniqueName: \"kubernetes.io/projected/6464cb6c-1ad4-4eef-b492-4351e8fb8d3a-kube-api-access-d4hvb\") on node \"crc\" DevicePath \"\""
Jan 21 09:30:23 crc kubenswrapper[5113]: I0121 09:30:23.611791 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ll4b"]
Jan 21 09:30:23 crc kubenswrapper[5113]: W0121 09:30:23.621518 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52e0414b_6283_42ab_9e76_609f811f45c8.slice/crio-b6a8c92147959a1ee6afb24864c882b166e4b98b239053e2eefaaad0c09b3d5b WatchSource:0}: Error finding container b6a8c92147959a1ee6afb24864c882b166e4b98b239053e2eefaaad0c09b3d5b: Status 404 returned error can't find the container with id b6a8c92147959a1ee6afb24864c882b166e4b98b239053e2eefaaad0c09b3d5b
Jan 21 09:30:23 crc kubenswrapper[5113]: I0121 09:30:23.821016 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-phvjc"]
Jan 21 09:30:23 crc kubenswrapper[5113]: W0121 09:30:23.835924 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61b36320_1108_4c53_b36e_485342f03802.slice/crio-f5730640db9d348d5b1225966d5bc41699de55cf5030514a6de6f3552fa37e8d WatchSource:0}: Error finding container f5730640db9d348d5b1225966d5bc41699de55cf5030514a6de6f3552fa37e8d: Status 404 returned error can't find the container with id f5730640db9d348d5b1225966d5bc41699de55cf5030514a6de6f3552fa37e8d
Jan 21 09:30:23 crc kubenswrapper[5113]: I0121 09:30:23.850518 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phvjc" event={"ID":"61b36320-1108-4c53-b36e-485342f03802","Type":"ContainerStarted","Data":"f5730640db9d348d5b1225966d5bc41699de55cf5030514a6de6f3552fa37e8d"}
Jan 21 09:30:23 crc kubenswrapper[5113]: I0121 09:30:23.851880 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ll4b" event={"ID":"52e0414b-6283-42ab-9e76-609f811f45c8","Type":"ContainerStarted","Data":"b6a8c92147959a1ee6afb24864c882b166e4b98b239053e2eefaaad0c09b3d5b"}
Jan 21 09:30:23 crc kubenswrapper[5113]: I0121 09:30:23.853605 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejfl7w"
Jan 21 09:30:23 crc kubenswrapper[5113]: I0121 09:30:23.853612 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejfl7w" event={"ID":"6464cb6c-1ad4-4eef-b492-4351e8fb8d3a","Type":"ContainerDied","Data":"3ddecd45a79fa092075e65ac1c87c781a7dfdb14f92c62c339b68f7d4082a7d5"}
Jan 21 09:30:23 crc kubenswrapper[5113]: I0121 09:30:23.853646 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ddecd45a79fa092075e65ac1c87c781a7dfdb14f92c62c339b68f7d4082a7d5"
Jan 21 09:30:24 crc kubenswrapper[5113]: I0121 09:30:24.744780 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8qcgd"
Jan 21 09:30:24 crc kubenswrapper[5113]: I0121 09:30:24.744829 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-8qcgd"
Jan 21 09:30:24 crc kubenswrapper[5113]: I0121 09:30:24.877615 5113 generic.go:358] "Generic (PLEG): container finished" podID="61b36320-1108-4c53-b36e-485342f03802" containerID="65b9cad96297075ee7c6c1b74687217afdff4ac25bf1fb430f2345f3d3e3f246" exitCode=0
Jan 21 09:30:24 crc kubenswrapper[5113]: I0121 09:30:24.877795 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phvjc" event={"ID":"61b36320-1108-4c53-b36e-485342f03802","Type":"ContainerDied","Data":"65b9cad96297075ee7c6c1b74687217afdff4ac25bf1fb430f2345f3d3e3f246"}
Jan 21 09:30:24 crc kubenswrapper[5113]: I0121 09:30:24.893154 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ll4b" event={"ID":"52e0414b-6283-42ab-9e76-609f811f45c8","Type":"ContainerStarted","Data":"2a6b9cba5d0d2a534bda11b4eb16c979bd2ee37c6d4ac2f1dd7179ee1aa0d600"}
Jan 21 09:30:25 crc kubenswrapper[5113]: I0121 09:30:25.797952 5113 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8qcgd" podUID="5a48d508-39d9-4ba5-bc97-1355f781b5b2" containerName="registry-server" probeResult="failure" output=<
Jan 21 09:30:25 crc kubenswrapper[5113]: timeout: failed to connect service ":50051" within 1s
Jan 21 09:30:25 crc kubenswrapper[5113]: >
Jan 21 09:30:25 crc kubenswrapper[5113]: I0121 09:30:25.899682 5113 generic.go:358] "Generic (PLEG): container finished" podID="52e0414b-6283-42ab-9e76-609f811f45c8" containerID="2a6b9cba5d0d2a534bda11b4eb16c979bd2ee37c6d4ac2f1dd7179ee1aa0d600" exitCode=0
Jan 21 09:30:25 crc kubenswrapper[5113]: I0121 09:30:25.899773 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ll4b" event={"ID":"52e0414b-6283-42ab-9e76-609f811f45c8","Type":"ContainerDied","Data":"2a6b9cba5d0d2a534bda11b4eb16c979bd2ee37c6d4ac2f1dd7179ee1aa0d600"}
Jan 21 09:30:25 crc kubenswrapper[5113]: I0121 09:30:25.901985 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnb4sw" event={"ID":"95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe","Type":"ContainerStarted","Data":"1b038c8b03f1f469f4eefb4aeca597166c3935c8faae53c0a04f2357bed3d6e7"}
Jan 21 09:30:25 crc kubenswrapper[5113]: I0121 09:30:25.904012 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phvjc" event={"ID":"61b36320-1108-4c53-b36e-485342f03802","Type":"ContainerStarted","Data":"b89c6a7aeae6bbe58d2f4e5f5ba0b66051b0eb59f56bcbcac0f808419727a803"}
Jan 21 09:30:26 crc kubenswrapper[5113]: I0121 09:30:26.910449 5113 generic.go:358] "Generic (PLEG): container finished" podID="95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe" containerID="1b038c8b03f1f469f4eefb4aeca597166c3935c8faae53c0a04f2357bed3d6e7" exitCode=0
Jan 21 09:30:26 crc kubenswrapper[5113]: I0121 09:30:26.910681 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnb4sw" event={"ID":"95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe","Type":"ContainerDied","Data":"1b038c8b03f1f469f4eefb4aeca597166c3935c8faae53c0a04f2357bed3d6e7"}
Jan 21 09:30:26 crc kubenswrapper[5113]: I0121 09:30:26.913085 5113 generic.go:358] "Generic (PLEG): container finished" podID="61b36320-1108-4c53-b36e-485342f03802" containerID="b89c6a7aeae6bbe58d2f4e5f5ba0b66051b0eb59f56bcbcac0f808419727a803" exitCode=0
Jan 21 09:30:26 crc kubenswrapper[5113]: I0121 09:30:26.913281 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phvjc" event={"ID":"61b36320-1108-4c53-b36e-485342f03802","Type":"ContainerDied","Data":"b89c6a7aeae6bbe58d2f4e5f5ba0b66051b0eb59f56bcbcac0f808419727a803"}
Jan 21 09:30:27 crc kubenswrapper[5113]: I0121 09:30:27.913091 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-vpdxw"]
Jan 21 09:30:27 crc kubenswrapper[5113]: I0121 09:30:27.913801 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6464cb6c-1ad4-4eef-b492-4351e8fb8d3a" containerName="pull"
Jan 21 09:30:27 crc kubenswrapper[5113]: I0121 09:30:27.913817 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="6464cb6c-1ad4-4eef-b492-4351e8fb8d3a" containerName="pull"
Jan 21 09:30:27 crc kubenswrapper[5113]: I0121 09:30:27.913840 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6464cb6c-1ad4-4eef-b492-4351e8fb8d3a" containerName="extract"
Jan 21 09:30:27 crc kubenswrapper[5113]: I0121 09:30:27.913848 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="6464cb6c-1ad4-4eef-b492-4351e8fb8d3a" containerName="extract"
Jan 21 09:30:27 crc kubenswrapper[5113]: I0121 09:30:27.913868 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6464cb6c-1ad4-4eef-b492-4351e8fb8d3a" containerName="util"
Jan 21 09:30:27 crc kubenswrapper[5113]: I0121 09:30:27.913876 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="6464cb6c-1ad4-4eef-b492-4351e8fb8d3a" containerName="util"
Jan 21 09:30:27 crc kubenswrapper[5113]: I0121 09:30:27.913976 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="6464cb6c-1ad4-4eef-b492-4351e8fb8d3a" containerName="extract"
Jan 21 09:30:27 crc kubenswrapper[5113]: I0121 09:30:27.936939 5113 generic.go:358] "Generic (PLEG): container finished" podID="95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe" containerID="e97ca7b54bc416e64efe0eae6b62d37a995766ca3d3d96f4d6ae6026ea772b29" exitCode=0
Jan 21 09:30:28 crc kubenswrapper[5113]: I0121 09:30:28.339804 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 09:30:28 crc kubenswrapper[5113]: I0121 09:30:28.339896 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 09:30:28 crc kubenswrapper[5113]: I0121 09:30:28.559135 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-vpdxw"]
Jan 21 09:30:28 crc kubenswrapper[5113]: I0121 09:30:28.559182 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5ddf46cb8b-sr4n4"]
Jan 21 09:30:28 crc kubenswrapper[5113]: I0121 09:30:28.559497 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-vpdxw"
Jan 21 09:30:28 crc kubenswrapper[5113]: I0121 09:30:28.561375 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\""
Jan 21 09:30:28 crc kubenswrapper[5113]: I0121 09:30:28.561406 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\""
Jan 21 09:30:28 crc kubenswrapper[5113]: I0121 09:30:28.562906 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-d299s\""
Jan 21 09:30:28 crc kubenswrapper[5113]: I0121 09:30:28.580441 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-phvjc" podStartSLOduration=5.758557373 podStartE2EDuration="6.580422645s" podCreationTimestamp="2026-01-21 09:30:22 +0000 UTC" firstStartedPulling="2026-01-21 09:30:24.878559814 +0000 UTC m=+754.379386863" lastFinishedPulling="2026-01-21 09:30:25.700425066 +0000 UTC m=+755.201252135" observedRunningTime="2026-01-21 09:30:28.577690506 +0000 UTC m=+758.078517565" watchObservedRunningTime="2026-01-21 09:30:28.580422645 +0000 UTC m=+758.081249694"
Jan 21 09:30:28 crc kubenswrapper[5113]: I0121 09:30:28.603470 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4n65\" (UniqueName: \"kubernetes.io/projected/7944be2a-7f45-495b-90e5-b31570149a43-kube-api-access-t4n65\") pod \"obo-prometheus-operator-9bc85b4bf-vpdxw\" (UID: \"7944be2a-7f45-495b-90e5-b31570149a43\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-vpdxw"
Jan 21 09:30:28 crc kubenswrapper[5113]: I0121 09:30:28.705330 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t4n65\" (UniqueName: \"kubernetes.io/projected/7944be2a-7f45-495b-90e5-b31570149a43-kube-api-access-t4n65\") pod \"obo-prometheus-operator-9bc85b4bf-vpdxw\" (UID: \"7944be2a-7f45-495b-90e5-b31570149a43\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-vpdxw"
Jan 21 09:30:28 crc kubenswrapper[5113]: I0121 09:30:28.729754 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4n65\" (UniqueName: \"kubernetes.io/projected/7944be2a-7f45-495b-90e5-b31570149a43-kube-api-access-t4n65\") pod \"obo-prometheus-operator-9bc85b4bf-vpdxw\" (UID: \"7944be2a-7f45-495b-90e5-b31570149a43\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-vpdxw"
Jan 21 09:30:28 crc kubenswrapper[5113]: I0121 09:30:28.875821 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-vpdxw"
Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.077803 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ddf46cb8b-sr4n4"
Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.082689 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-pj56h\""
Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.082972 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\""
Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.083770 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnb4sw" event={"ID":"95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe","Type":"ContainerDied","Data":"e97ca7b54bc416e64efe0eae6b62d37a995766ca3d3d96f4d6ae6026ea772b29"}
Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.085764 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5ddf46cb8b-sr4n4"]
Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.085942 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phvjc" event={"ID":"61b36320-1108-4c53-b36e-485342f03802","Type":"ContainerStarted","Data":"a3ee4c73e6c550ee0ca29e228a8813fde3f7265c4e1e6a7a94392b736783a8d6"}
Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.087095 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5ddf46cb8b-xs9qp"]
Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.112419 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3843f0f9-ae8b-4934-a635-75e80ae8379d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5ddf46cb8b-sr4n4\" (UID: \"3843f0f9-ae8b-4934-a635-75e80ae8379d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ddf46cb8b-sr4n4"
Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.112787 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3843f0f9-ae8b-4934-a635-75e80ae8379d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5ddf46cb8b-sr4n4\" (UID: \"3843f0f9-ae8b-4934-a635-75e80ae8379d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ddf46cb8b-sr4n4"
Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.169597 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5ddf46cb8b-xs9qp"]
Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.169634 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-85c68dddb-5jl7x"]
Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.169817 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ddf46cb8b-xs9qp"
Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.175247 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-5jl7x"]
Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.175268 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-mfk8j"]
Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.175945 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-5jl7x"
Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.179928 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-mfk8j"]
Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.179980 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-vpdxw"]
Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.180107 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-mfk8j"
Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.181622 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\""
Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.181958 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-8sp7w\""
Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.182053 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-g22g6\""
Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.214543 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a33c9424-1bbb-4a0c-9fb2-fd5eb6de667e-openshift-service-ca\") pod \"perses-operator-669c9f96b5-mfk8j\" (UID: \"a33c9424-1bbb-4a0c-9fb2-fd5eb6de667e\") " pod="openshift-operators/perses-operator-669c9f96b5-mfk8j"
Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.214594 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3843f0f9-ae8b-4934-a635-75e80ae8379d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5ddf46cb8b-sr4n4\" (UID: \"3843f0f9-ae8b-4934-a635-75e80ae8379d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ddf46cb8b-sr4n4"
Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.214614 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbb9t\" (UniqueName: \"kubernetes.io/projected/a33c9424-1bbb-4a0c-9fb2-fd5eb6de667e-kube-api-access-qbb9t\") pod \"perses-operator-669c9f96b5-mfk8j\" (UID: \"a33c9424-1bbb-4a0c-9fb2-fd5eb6de667e\") " pod="openshift-operators/perses-operator-669c9f96b5-mfk8j"
Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.214643 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nfbt\" (UniqueName: \"kubernetes.io/projected/1ac4f96e-4018-4e2d-8a80-2eff7c26c08e-kube-api-access-6nfbt\") pod \"observability-operator-85c68dddb-5jl7x\" (UID: \"1ac4f96e-4018-4e2d-8a80-2eff7c26c08e\") " pod="openshift-operators/observability-operator-85c68dddb-5jl7x"
Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.214748 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/1ac4f96e-4018-4e2d-8a80-2eff7c26c08e-observability-operator-tls\") pod \"observability-operator-85c68dddb-5jl7x\" (UID: \"1ac4f96e-4018-4e2d-8a80-2eff7c26c08e\") " pod="openshift-operators/observability-operator-85c68dddb-5jl7x"
Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.214935 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/83b782b2-ea6f-4d32-a56b-7c8ad0c39688-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5ddf46cb8b-xs9qp\" (UID: \"83b782b2-ea6f-4d32-a56b-7c8ad0c39688\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ddf46cb8b-xs9qp"
Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.215018 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3843f0f9-ae8b-4934-a635-75e80ae8379d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5ddf46cb8b-sr4n4\" (UID: \"3843f0f9-ae8b-4934-a635-75e80ae8379d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ddf46cb8b-sr4n4"
Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.215067 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/83b782b2-ea6f-4d32-a56b-7c8ad0c39688-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5ddf46cb8b-xs9qp\" (UID: \"83b782b2-ea6f-4d32-a56b-7c8ad0c39688\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ddf46cb8b-xs9qp"
Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.234484 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3843f0f9-ae8b-4934-a635-75e80ae8379d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5ddf46cb8b-sr4n4\" (UID: \"3843f0f9-ae8b-4934-a635-75e80ae8379d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ddf46cb8b-sr4n4"
Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.253452 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3843f0f9-ae8b-4934-a635-75e80ae8379d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5ddf46cb8b-sr4n4\" (UID: \"3843f0f9-ae8b-4934-a635-75e80ae8379d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ddf46cb8b-sr4n4"
Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.300246 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnb4sw"
Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.317444 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/83b782b2-ea6f-4d32-a56b-7c8ad0c39688-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5ddf46cb8b-xs9qp\" (UID: \"83b782b2-ea6f-4d32-a56b-7c8ad0c39688\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ddf46cb8b-xs9qp"
Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.317507 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/83b782b2-ea6f-4d32-a56b-7c8ad0c39688-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5ddf46cb8b-xs9qp\" (UID: \"83b782b2-ea6f-4d32-a56b-7c8ad0c39688\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ddf46cb8b-xs9qp"
Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.317535 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a33c9424-1bbb-4a0c-9fb2-fd5eb6de667e-openshift-service-ca\") pod \"perses-operator-669c9f96b5-mfk8j\" (UID: \"a33c9424-1bbb-4a0c-9fb2-fd5eb6de667e\") " pod="openshift-operators/perses-operator-669c9f96b5-mfk8j"
Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.317567 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qbb9t\" (UniqueName: \"kubernetes.io/projected/a33c9424-1bbb-4a0c-9fb2-fd5eb6de667e-kube-api-access-qbb9t\") pod \"perses-operator-669c9f96b5-mfk8j\" (UID: \"a33c9424-1bbb-4a0c-9fb2-fd5eb6de667e\") " pod="openshift-operators/perses-operator-669c9f96b5-mfk8j"
Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.317591 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6nfbt\" (UniqueName: \"kubernetes.io/projected/1ac4f96e-4018-4e2d-8a80-2eff7c26c08e-kube-api-access-6nfbt\") pod \"observability-operator-85c68dddb-5jl7x\" (UID: \"1ac4f96e-4018-4e2d-8a80-2eff7c26c08e\") " pod="openshift-operators/observability-operator-85c68dddb-5jl7x"
Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.317607 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/1ac4f96e-4018-4e2d-8a80-2eff7c26c08e-observability-operator-tls\") pod \"observability-operator-85c68dddb-5jl7x\" (UID: \"1ac4f96e-4018-4e2d-8a80-2eff7c26c08e\") " pod="openshift-operators/observability-operator-85c68dddb-5jl7x"
Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.318718 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a33c9424-1bbb-4a0c-9fb2-fd5eb6de667e-openshift-service-ca\") pod \"perses-operator-669c9f96b5-mfk8j\" (UID: \"a33c9424-1bbb-4a0c-9fb2-fd5eb6de667e\") " pod="openshift-operators/perses-operator-669c9f96b5-mfk8j"
Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.321151 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/83b782b2-ea6f-4d32-a56b-7c8ad0c39688-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5ddf46cb8b-xs9qp\" (UID: \"83b782b2-ea6f-4d32-a56b-7c8ad0c39688\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ddf46cb8b-xs9qp"
Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.321212 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/1ac4f96e-4018-4e2d-8a80-2eff7c26c08e-observability-operator-tls\") pod \"observability-operator-85c68dddb-5jl7x\" (UID: \"1ac4f96e-4018-4e2d-8a80-2eff7c26c08e\")
" pod="openshift-operators/observability-operator-85c68dddb-5jl7x" Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.324049 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/83b782b2-ea6f-4d32-a56b-7c8ad0c39688-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5ddf46cb8b-xs9qp\" (UID: \"83b782b2-ea6f-4d32-a56b-7c8ad0c39688\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ddf46cb8b-xs9qp" Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.334990 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nfbt\" (UniqueName: \"kubernetes.io/projected/1ac4f96e-4018-4e2d-8a80-2eff7c26c08e-kube-api-access-6nfbt\") pod \"observability-operator-85c68dddb-5jl7x\" (UID: \"1ac4f96e-4018-4e2d-8a80-2eff7c26c08e\") " pod="openshift-operators/observability-operator-85c68dddb-5jl7x" Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.345283 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbb9t\" (UniqueName: \"kubernetes.io/projected/a33c9424-1bbb-4a0c-9fb2-fd5eb6de667e-kube-api-access-qbb9t\") pod \"perses-operator-669c9f96b5-mfk8j\" (UID: \"a33c9424-1bbb-4a0c-9fb2-fd5eb6de667e\") " pod="openshift-operators/perses-operator-669c9f96b5-mfk8j" Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.402113 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ddf46cb8b-sr4n4" Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.418542 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe-util\") pod \"95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe\" (UID: \"95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe\") " Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.418750 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ds8q\" (UniqueName: \"kubernetes.io/projected/95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe-kube-api-access-9ds8q\") pod \"95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe\" (UID: \"95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe\") " Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.418772 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe-bundle\") pod \"95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe\" (UID: \"95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe\") " Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.419280 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe-bundle" (OuterVolumeSpecName: "bundle") pod "95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe" (UID: "95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.423872 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe-kube-api-access-9ds8q" (OuterVolumeSpecName: "kube-api-access-9ds8q") pod "95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe" (UID: "95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe"). InnerVolumeSpecName "kube-api-access-9ds8q". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.428378 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe-util" (OuterVolumeSpecName: "util") pod "95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe" (UID: "95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.511038 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ddf46cb8b-xs9qp" Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.522797 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-5jl7x" Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.523271 5113 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe-util\") on node \"crc\" DevicePath \"\"" Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.523306 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9ds8q\" (UniqueName: \"kubernetes.io/projected/95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe-kube-api-access-9ds8q\") on node \"crc\" DevicePath \"\"" Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.523315 5113 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.573811 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-mfk8j" Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.876610 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-5jl7x"] Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.957886 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5ddf46cb8b-sr4n4"] Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.962155 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnb4sw" event={"ID":"95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe","Type":"ContainerDied","Data":"afb8722989d1ade038395372acec9c71132e793bcdefd24b364eef27faa40d27"} Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.962203 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afb8722989d1ade038395372acec9c71132e793bcdefd24b364eef27faa40d27" Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.962319 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnb4sw" Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.971784 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-5jl7x" event={"ID":"1ac4f96e-4018-4e2d-8a80-2eff7c26c08e","Type":"ContainerStarted","Data":"2482373c673581596e3f0183b3d0e894bc4454072d40c9c74eaac45528742c70"} Jan 21 09:30:29 crc kubenswrapper[5113]: I0121 09:30:29.979414 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-vpdxw" event={"ID":"7944be2a-7f45-495b-90e5-b31570149a43","Type":"ContainerStarted","Data":"3261db2a446043913f986f6f3be9f7df8778350d85ca19e8c2b1885567281f5c"} Jan 21 09:30:30 crc kubenswrapper[5113]: I0121 09:30:30.040288 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5ddf46cb8b-xs9qp"] Jan 21 09:30:30 crc kubenswrapper[5113]: W0121 09:30:30.356158 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda33c9424_1bbb_4a0c_9fb2_fd5eb6de667e.slice/crio-744a9ada98b311c6958295839581304d8fa3272cb04bb20c56df5dba0106789d WatchSource:0}: Error finding container 744a9ada98b311c6958295839581304d8fa3272cb04bb20c56df5dba0106789d: Status 404 returned error can't find the container with id 744a9ada98b311c6958295839581304d8fa3272cb04bb20c56df5dba0106789d Jan 21 09:30:30 crc kubenswrapper[5113]: I0121 09:30:30.364228 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-mfk8j"] Jan 21 09:30:30 crc kubenswrapper[5113]: I0121 09:30:30.987240 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ddf46cb8b-xs9qp" 
event={"ID":"83b782b2-ea6f-4d32-a56b-7c8ad0c39688","Type":"ContainerStarted","Data":"3dbd713ad5243dd9db84f966ea4b411debafd75a262094f10a6427b8807f997e"} Jan 21 09:30:30 crc kubenswrapper[5113]: I0121 09:30:30.988507 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-mfk8j" event={"ID":"a33c9424-1bbb-4a0c-9fb2-fd5eb6de667e","Type":"ContainerStarted","Data":"744a9ada98b311c6958295839581304d8fa3272cb04bb20c56df5dba0106789d"} Jan 21 09:30:30 crc kubenswrapper[5113]: I0121 09:30:30.990031 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ddf46cb8b-sr4n4" event={"ID":"3843f0f9-ae8b-4934-a635-75e80ae8379d","Type":"ContainerStarted","Data":"b232a219985585b288a17b318b61d75a9f1c7683f959954f3be66ad0470e967d"} Jan 21 09:30:32 crc kubenswrapper[5113]: I0121 09:30:32.935310 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-phvjc" Jan 21 09:30:32 crc kubenswrapper[5113]: I0121 09:30:32.935722 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-phvjc" Jan 21 09:30:33 crc kubenswrapper[5113]: I0121 09:30:33.061591 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-phvjc" Jan 21 09:30:33 crc kubenswrapper[5113]: I0121 09:30:33.118368 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-prrkk"] Jan 21 09:30:33 crc kubenswrapper[5113]: I0121 09:30:33.119129 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe" containerName="extract" Jan 21 09:30:33 crc kubenswrapper[5113]: I0121 09:30:33.119149 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe" containerName="extract" Jan 21 
09:30:33 crc kubenswrapper[5113]: I0121 09:30:33.119162 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe" containerName="pull" Jan 21 09:30:33 crc kubenswrapper[5113]: I0121 09:30:33.119169 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe" containerName="pull" Jan 21 09:30:33 crc kubenswrapper[5113]: I0121 09:30:33.119200 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe" containerName="util" Jan 21 09:30:33 crc kubenswrapper[5113]: I0121 09:30:33.119208 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe" containerName="util" Jan 21 09:30:33 crc kubenswrapper[5113]: I0121 09:30:33.119326 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe" containerName="extract" Jan 21 09:30:33 crc kubenswrapper[5113]: I0121 09:30:33.126464 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-prrkk" Jan 21 09:30:33 crc kubenswrapper[5113]: I0121 09:30:33.129390 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\"" Jan 21 09:30:33 crc kubenswrapper[5113]: I0121 09:30:33.129428 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\"" Jan 21 09:30:33 crc kubenswrapper[5113]: I0121 09:30:33.129560 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"interconnect-operator-dockercfg-kjbl8\"" Jan 21 09:30:33 crc kubenswrapper[5113]: I0121 09:30:33.154885 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-prrkk"] Jan 21 09:30:33 crc kubenswrapper[5113]: I0121 09:30:33.187076 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77rrb\" (UniqueName: \"kubernetes.io/projected/7c1cc988-c5a8-4ee1-a41b-1fd925a848dc-kube-api-access-77rrb\") pod \"interconnect-operator-78b9bd8798-prrkk\" (UID: \"7c1cc988-c5a8-4ee1-a41b-1fd925a848dc\") " pod="service-telemetry/interconnect-operator-78b9bd8798-prrkk" Jan 21 09:30:33 crc kubenswrapper[5113]: I0121 09:30:33.289593 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-77rrb\" (UniqueName: \"kubernetes.io/projected/7c1cc988-c5a8-4ee1-a41b-1fd925a848dc-kube-api-access-77rrb\") pod \"interconnect-operator-78b9bd8798-prrkk\" (UID: \"7c1cc988-c5a8-4ee1-a41b-1fd925a848dc\") " pod="service-telemetry/interconnect-operator-78b9bd8798-prrkk" Jan 21 09:30:33 crc kubenswrapper[5113]: I0121 09:30:33.346897 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-77rrb\" (UniqueName: 
\"kubernetes.io/projected/7c1cc988-c5a8-4ee1-a41b-1fd925a848dc-kube-api-access-77rrb\") pod \"interconnect-operator-78b9bd8798-prrkk\" (UID: \"7c1cc988-c5a8-4ee1-a41b-1fd925a848dc\") " pod="service-telemetry/interconnect-operator-78b9bd8798-prrkk" Jan 21 09:30:33 crc kubenswrapper[5113]: I0121 09:30:33.455211 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-prrkk" Jan 21 09:30:34 crc kubenswrapper[5113]: I0121 09:30:34.075948 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-phvjc" Jan 21 09:30:34 crc kubenswrapper[5113]: I0121 09:30:34.095367 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-prrkk"] Jan 21 09:30:34 crc kubenswrapper[5113]: I0121 09:30:34.810110 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8qcgd" Jan 21 09:30:34 crc kubenswrapper[5113]: I0121 09:30:34.869654 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8qcgd" Jan 21 09:30:35 crc kubenswrapper[5113]: I0121 09:30:35.064315 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-prrkk" event={"ID":"7c1cc988-c5a8-4ee1-a41b-1fd925a848dc","Type":"ContainerStarted","Data":"3938e3a60d18ee77461a3593da40c5d68bcfb8e145aff53874ac12828cf73112"} Jan 21 09:30:35 crc kubenswrapper[5113]: I0121 09:30:35.975654 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-578f8f8d6c-f524v"] Jan 21 09:30:35 crc kubenswrapper[5113]: I0121 09:30:35.988083 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-578f8f8d6c-f524v"] Jan 21 09:30:35 crc kubenswrapper[5113]: I0121 09:30:35.988231 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-578f8f8d6c-f524v" Jan 21 09:30:35 crc kubenswrapper[5113]: I0121 09:30:35.993111 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\"" Jan 21 09:30:35 crc kubenswrapper[5113]: I0121 09:30:35.993298 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-bq8xd\"" Jan 21 09:30:36 crc kubenswrapper[5113]: I0121 09:30:36.142693 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/019809be-ccc7-49df-89f9-84eff425459d-webhook-cert\") pod \"elastic-operator-578f8f8d6c-f524v\" (UID: \"019809be-ccc7-49df-89f9-84eff425459d\") " pod="service-telemetry/elastic-operator-578f8f8d6c-f524v" Jan 21 09:30:36 crc kubenswrapper[5113]: I0121 09:30:36.142953 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/019809be-ccc7-49df-89f9-84eff425459d-apiservice-cert\") pod \"elastic-operator-578f8f8d6c-f524v\" (UID: \"019809be-ccc7-49df-89f9-84eff425459d\") " pod="service-telemetry/elastic-operator-578f8f8d6c-f524v" Jan 21 09:30:36 crc kubenswrapper[5113]: I0121 09:30:36.143036 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2g8z\" (UniqueName: \"kubernetes.io/projected/019809be-ccc7-49df-89f9-84eff425459d-kube-api-access-x2g8z\") pod \"elastic-operator-578f8f8d6c-f524v\" (UID: \"019809be-ccc7-49df-89f9-84eff425459d\") " pod="service-telemetry/elastic-operator-578f8f8d6c-f524v" Jan 21 09:30:36 crc kubenswrapper[5113]: I0121 09:30:36.244471 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/019809be-ccc7-49df-89f9-84eff425459d-webhook-cert\") pod \"elastic-operator-578f8f8d6c-f524v\" (UID: \"019809be-ccc7-49df-89f9-84eff425459d\") " pod="service-telemetry/elastic-operator-578f8f8d6c-f524v" Jan 21 09:30:36 crc kubenswrapper[5113]: I0121 09:30:36.244566 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/019809be-ccc7-49df-89f9-84eff425459d-apiservice-cert\") pod \"elastic-operator-578f8f8d6c-f524v\" (UID: \"019809be-ccc7-49df-89f9-84eff425459d\") " pod="service-telemetry/elastic-operator-578f8f8d6c-f524v" Jan 21 09:30:36 crc kubenswrapper[5113]: I0121 09:30:36.244611 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x2g8z\" (UniqueName: \"kubernetes.io/projected/019809be-ccc7-49df-89f9-84eff425459d-kube-api-access-x2g8z\") pod \"elastic-operator-578f8f8d6c-f524v\" (UID: \"019809be-ccc7-49df-89f9-84eff425459d\") " pod="service-telemetry/elastic-operator-578f8f8d6c-f524v" Jan 21 09:30:36 crc kubenswrapper[5113]: I0121 09:30:36.255295 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/019809be-ccc7-49df-89f9-84eff425459d-apiservice-cert\") pod \"elastic-operator-578f8f8d6c-f524v\" (UID: \"019809be-ccc7-49df-89f9-84eff425459d\") " pod="service-telemetry/elastic-operator-578f8f8d6c-f524v" Jan 21 09:30:36 crc kubenswrapper[5113]: I0121 09:30:36.258088 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/019809be-ccc7-49df-89f9-84eff425459d-webhook-cert\") pod \"elastic-operator-578f8f8d6c-f524v\" (UID: \"019809be-ccc7-49df-89f9-84eff425459d\") " pod="service-telemetry/elastic-operator-578f8f8d6c-f524v" Jan 21 09:30:36 crc kubenswrapper[5113]: I0121 09:30:36.262671 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-x2g8z\" (UniqueName: \"kubernetes.io/projected/019809be-ccc7-49df-89f9-84eff425459d-kube-api-access-x2g8z\") pod \"elastic-operator-578f8f8d6c-f524v\" (UID: \"019809be-ccc7-49df-89f9-84eff425459d\") " pod="service-telemetry/elastic-operator-578f8f8d6c-f524v" Jan 21 09:30:36 crc kubenswrapper[5113]: I0121 09:30:36.309556 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-578f8f8d6c-f524v" Jan 21 09:30:37 crc kubenswrapper[5113]: I0121 09:30:37.395385 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-phvjc"] Jan 21 09:30:37 crc kubenswrapper[5113]: I0121 09:30:37.395655 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-phvjc" podUID="61b36320-1108-4c53-b36e-485342f03802" containerName="registry-server" containerID="cri-o://a3ee4c73e6c550ee0ca29e228a8813fde3f7265c4e1e6a7a94392b736783a8d6" gracePeriod=2 Jan 21 09:30:38 crc kubenswrapper[5113]: I0121 09:30:38.097266 5113 generic.go:358] "Generic (PLEG): container finished" podID="61b36320-1108-4c53-b36e-485342f03802" containerID="a3ee4c73e6c550ee0ca29e228a8813fde3f7265c4e1e6a7a94392b736783a8d6" exitCode=0 Jan 21 09:30:38 crc kubenswrapper[5113]: I0121 09:30:38.097326 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phvjc" event={"ID":"61b36320-1108-4c53-b36e-485342f03802","Type":"ContainerDied","Data":"a3ee4c73e6c550ee0ca29e228a8813fde3f7265c4e1e6a7a94392b736783a8d6"} Jan 21 09:30:39 crc kubenswrapper[5113]: I0121 09:30:39.394256 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8qcgd"] Jan 21 09:30:39 crc kubenswrapper[5113]: I0121 09:30:39.394533 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8qcgd" podUID="5a48d508-39d9-4ba5-bc97-1355f781b5b2" 
containerName="registry-server" containerID="cri-o://8ea082b667c88244005980c86e771b31f4ff40bb7cb05e564744133924c5b315" gracePeriod=2 Jan 21 09:30:40 crc kubenswrapper[5113]: I0121 09:30:40.113610 5113 generic.go:358] "Generic (PLEG): container finished" podID="5a48d508-39d9-4ba5-bc97-1355f781b5b2" containerID="8ea082b667c88244005980c86e771b31f4ff40bb7cb05e564744133924c5b315" exitCode=0 Jan 21 09:30:40 crc kubenswrapper[5113]: I0121 09:30:40.113692 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8qcgd" event={"ID":"5a48d508-39d9-4ba5-bc97-1355f781b5b2","Type":"ContainerDied","Data":"8ea082b667c88244005980c86e771b31f4ff40bb7cb05e564744133924c5b315"} Jan 21 09:30:44 crc kubenswrapper[5113]: E0121 09:30:44.033403 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a3ee4c73e6c550ee0ca29e228a8813fde3f7265c4e1e6a7a94392b736783a8d6 is running failed: container process not found" containerID="a3ee4c73e6c550ee0ca29e228a8813fde3f7265c4e1e6a7a94392b736783a8d6" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 09:30:44 crc kubenswrapper[5113]: E0121 09:30:44.034561 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a3ee4c73e6c550ee0ca29e228a8813fde3f7265c4e1e6a7a94392b736783a8d6 is running failed: container process not found" containerID="a3ee4c73e6c550ee0ca29e228a8813fde3f7265c4e1e6a7a94392b736783a8d6" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 09:30:44 crc kubenswrapper[5113]: E0121 09:30:44.034993 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a3ee4c73e6c550ee0ca29e228a8813fde3f7265c4e1e6a7a94392b736783a8d6 is running failed: container process not found" 
containerID="a3ee4c73e6c550ee0ca29e228a8813fde3f7265c4e1e6a7a94392b736783a8d6" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 09:30:44 crc kubenswrapper[5113]: E0121 09:30:44.035102 5113 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a3ee4c73e6c550ee0ca29e228a8813fde3f7265c4e1e6a7a94392b736783a8d6 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-phvjc" podUID="61b36320-1108-4c53-b36e-485342f03802" containerName="registry-server" probeResult="unknown" Jan 21 09:30:44 crc kubenswrapper[5113]: I0121 09:30:44.590457 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-phvjc" Jan 21 09:30:44 crc kubenswrapper[5113]: I0121 09:30:44.760832 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61b36320-1108-4c53-b36e-485342f03802-utilities\") pod \"61b36320-1108-4c53-b36e-485342f03802\" (UID: \"61b36320-1108-4c53-b36e-485342f03802\") " Jan 21 09:30:44 crc kubenswrapper[5113]: I0121 09:30:44.761097 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdpwk\" (UniqueName: \"kubernetes.io/projected/61b36320-1108-4c53-b36e-485342f03802-kube-api-access-qdpwk\") pod \"61b36320-1108-4c53-b36e-485342f03802\" (UID: \"61b36320-1108-4c53-b36e-485342f03802\") " Jan 21 09:30:44 crc kubenswrapper[5113]: I0121 09:30:44.761113 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61b36320-1108-4c53-b36e-485342f03802-catalog-content\") pod \"61b36320-1108-4c53-b36e-485342f03802\" (UID: \"61b36320-1108-4c53-b36e-485342f03802\") " Jan 21 09:30:44 crc kubenswrapper[5113]: I0121 09:30:44.762001 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/empty-dir/61b36320-1108-4c53-b36e-485342f03802-utilities" (OuterVolumeSpecName: "utilities") pod "61b36320-1108-4c53-b36e-485342f03802" (UID: "61b36320-1108-4c53-b36e-485342f03802"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:30:44 crc kubenswrapper[5113]: I0121 09:30:44.776119 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61b36320-1108-4c53-b36e-485342f03802-kube-api-access-qdpwk" (OuterVolumeSpecName: "kube-api-access-qdpwk") pod "61b36320-1108-4c53-b36e-485342f03802" (UID: "61b36320-1108-4c53-b36e-485342f03802"). InnerVolumeSpecName "kube-api-access-qdpwk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:30:44 crc kubenswrapper[5113]: I0121 09:30:44.800075 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61b36320-1108-4c53-b36e-485342f03802-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61b36320-1108-4c53-b36e-485342f03802" (UID: "61b36320-1108-4c53-b36e-485342f03802"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:30:44 crc kubenswrapper[5113]: E0121 09:30:44.811479 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ea082b667c88244005980c86e771b31f4ff40bb7cb05e564744133924c5b315 is running failed: container process not found" containerID="8ea082b667c88244005980c86e771b31f4ff40bb7cb05e564744133924c5b315" cmd=["grpc_health_probe","-addr=:50051"]
Jan 21 09:30:44 crc kubenswrapper[5113]: E0121 09:30:44.811883 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ea082b667c88244005980c86e771b31f4ff40bb7cb05e564744133924c5b315 is running failed: container process not found" containerID="8ea082b667c88244005980c86e771b31f4ff40bb7cb05e564744133924c5b315" cmd=["grpc_health_probe","-addr=:50051"]
Jan 21 09:30:44 crc kubenswrapper[5113]: E0121 09:30:44.812101 5113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ea082b667c88244005980c86e771b31f4ff40bb7cb05e564744133924c5b315 is running failed: container process not found" containerID="8ea082b667c88244005980c86e771b31f4ff40bb7cb05e564744133924c5b315" cmd=["grpc_health_probe","-addr=:50051"]
Jan 21 09:30:44 crc kubenswrapper[5113]: E0121 09:30:44.812133 5113 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ea082b667c88244005980c86e771b31f4ff40bb7cb05e564744133924c5b315 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-8qcgd" podUID="5a48d508-39d9-4ba5-bc97-1355f781b5b2" containerName="registry-server" probeResult="unknown"
Jan 21 09:30:44 crc kubenswrapper[5113]: I0121 09:30:44.862305 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61b36320-1108-4c53-b36e-485342f03802-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 09:30:44 crc kubenswrapper[5113]: I0121 09:30:44.862341 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qdpwk\" (UniqueName: \"kubernetes.io/projected/61b36320-1108-4c53-b36e-485342f03802-kube-api-access-qdpwk\") on node \"crc\" DevicePath \"\""
Jan 21 09:30:44 crc kubenswrapper[5113]: I0121 09:30:44.862352 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61b36320-1108-4c53-b36e-485342f03802-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 09:30:45 crc kubenswrapper[5113]: I0121 09:30:45.152006 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-phvjc"
Jan 21 09:30:45 crc kubenswrapper[5113]: I0121 09:30:45.152007 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-phvjc" event={"ID":"61b36320-1108-4c53-b36e-485342f03802","Type":"ContainerDied","Data":"f5730640db9d348d5b1225966d5bc41699de55cf5030514a6de6f3552fa37e8d"}
Jan 21 09:30:45 crc kubenswrapper[5113]: I0121 09:30:45.152128 5113 scope.go:117] "RemoveContainer" containerID="a3ee4c73e6c550ee0ca29e228a8813fde3f7265c4e1e6a7a94392b736783a8d6"
Jan 21 09:30:45 crc kubenswrapper[5113]: I0121 09:30:45.170154 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-phvjc"]
Jan 21 09:30:45 crc kubenswrapper[5113]: I0121 09:30:45.174594 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-phvjc"]
Jan 21 09:30:46 crc kubenswrapper[5113]: I0121 09:30:46.853188 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61b36320-1108-4c53-b36e-485342f03802" path="/var/lib/kubelet/pods/61b36320-1108-4c53-b36e-485342f03802/volumes"
Jan 21 09:30:48 crc kubenswrapper[5113]: I0121 09:30:48.045871 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8qcgd"
Jan 21 09:30:48 crc kubenswrapper[5113]: I0121 09:30:48.171316 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8qcgd" event={"ID":"5a48d508-39d9-4ba5-bc97-1355f781b5b2","Type":"ContainerDied","Data":"45535633d56219d80ddc51bb89a7e7e3c173d9faa6090cc9b1c82c9b06d5bb40"}
Jan 21 09:30:48 crc kubenswrapper[5113]: I0121 09:30:48.171417 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8qcgd"
Jan 21 09:30:48 crc kubenswrapper[5113]: I0121 09:30:48.213290 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxt5h\" (UniqueName: \"kubernetes.io/projected/5a48d508-39d9-4ba5-bc97-1355f781b5b2-kube-api-access-hxt5h\") pod \"5a48d508-39d9-4ba5-bc97-1355f781b5b2\" (UID: \"5a48d508-39d9-4ba5-bc97-1355f781b5b2\") "
Jan 21 09:30:48 crc kubenswrapper[5113]: I0121 09:30:48.213345 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a48d508-39d9-4ba5-bc97-1355f781b5b2-utilities\") pod \"5a48d508-39d9-4ba5-bc97-1355f781b5b2\" (UID: \"5a48d508-39d9-4ba5-bc97-1355f781b5b2\") "
Jan 21 09:30:48 crc kubenswrapper[5113]: I0121 09:30:48.213378 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a48d508-39d9-4ba5-bc97-1355f781b5b2-catalog-content\") pod \"5a48d508-39d9-4ba5-bc97-1355f781b5b2\" (UID: \"5a48d508-39d9-4ba5-bc97-1355f781b5b2\") "
Jan 21 09:30:48 crc kubenswrapper[5113]: I0121 09:30:48.214457 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a48d508-39d9-4ba5-bc97-1355f781b5b2-utilities" (OuterVolumeSpecName: "utilities") pod "5a48d508-39d9-4ba5-bc97-1355f781b5b2" (UID: "5a48d508-39d9-4ba5-bc97-1355f781b5b2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:30:48 crc kubenswrapper[5113]: I0121 09:30:48.228547 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a48d508-39d9-4ba5-bc97-1355f781b5b2-kube-api-access-hxt5h" (OuterVolumeSpecName: "kube-api-access-hxt5h") pod "5a48d508-39d9-4ba5-bc97-1355f781b5b2" (UID: "5a48d508-39d9-4ba5-bc97-1355f781b5b2"). InnerVolumeSpecName "kube-api-access-hxt5h". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:30:48 crc kubenswrapper[5113]: I0121 09:30:48.314406 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hxt5h\" (UniqueName: \"kubernetes.io/projected/5a48d508-39d9-4ba5-bc97-1355f781b5b2-kube-api-access-hxt5h\") on node \"crc\" DevicePath \"\""
Jan 21 09:30:48 crc kubenswrapper[5113]: I0121 09:30:48.314725 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a48d508-39d9-4ba5-bc97-1355f781b5b2-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 09:30:48 crc kubenswrapper[5113]: I0121 09:30:48.320648 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a48d508-39d9-4ba5-bc97-1355f781b5b2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a48d508-39d9-4ba5-bc97-1355f781b5b2" (UID: "5a48d508-39d9-4ba5-bc97-1355f781b5b2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:30:48 crc kubenswrapper[5113]: I0121 09:30:48.415955 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a48d508-39d9-4ba5-bc97-1355f781b5b2-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 09:30:48 crc kubenswrapper[5113]: I0121 09:30:48.514354 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8qcgd"]
Jan 21 09:30:48 crc kubenswrapper[5113]: I0121 09:30:48.522879 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8qcgd"]
Jan 21 09:30:48 crc kubenswrapper[5113]: I0121 09:30:48.662765 5113 scope.go:117] "RemoveContainer" containerID="b89c6a7aeae6bbe58d2f4e5f5ba0b66051b0eb59f56bcbcac0f808419727a803"
Jan 21 09:30:48 crc kubenswrapper[5113]: I0121 09:30:48.772898 5113 scope.go:117] "RemoveContainer" containerID="65b9cad96297075ee7c6c1b74687217afdff4ac25bf1fb430f2345f3d3e3f246"
Jan 21 09:30:48 crc kubenswrapper[5113]: I0121 09:30:48.845795 5113 scope.go:117] "RemoveContainer" containerID="8ea082b667c88244005980c86e771b31f4ff40bb7cb05e564744133924c5b315"
Jan 21 09:30:48 crc kubenswrapper[5113]: I0121 09:30:48.862895 5113 scope.go:117] "RemoveContainer" containerID="5c90b4924c242fcfd92ef487c83373ec5d9e713bdcc3f831886a70bfebbf5c7b"
Jan 21 09:30:48 crc kubenswrapper[5113]: I0121 09:30:48.874616 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a48d508-39d9-4ba5-bc97-1355f781b5b2" path="/var/lib/kubelet/pods/5a48d508-39d9-4ba5-bc97-1355f781b5b2/volumes"
Jan 21 09:30:48 crc kubenswrapper[5113]: I0121 09:30:48.940930 5113 scope.go:117] "RemoveContainer" containerID="208e1cd34110471fbf0e4ce309d2fd9557a9b092aa86bee2d19c21d11974ba84"
Jan 21 09:30:48 crc kubenswrapper[5113]: I0121 09:30:48.944213 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-578f8f8d6c-f524v"]
Jan 21 09:30:49 crc kubenswrapper[5113]: I0121 09:30:49.178136 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ddf46cb8b-xs9qp" event={"ID":"83b782b2-ea6f-4d32-a56b-7c8ad0c39688","Type":"ContainerStarted","Data":"6a0a0d84e6c7ef8347d5865e6cdceeb78bc8f1082b0597516b13d7bb69ed4dd3"}
Jan 21 09:30:49 crc kubenswrapper[5113]: I0121 09:30:49.179451 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-5jl7x" event={"ID":"1ac4f96e-4018-4e2d-8a80-2eff7c26c08e","Type":"ContainerStarted","Data":"28391aab45e02186fe2515ab5683c10fc87bd38d5de67afc76afcab58459e356"}
Jan 21 09:30:49 crc kubenswrapper[5113]: I0121 09:30:49.179710 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-85c68dddb-5jl7x"
Jan 21 09:30:49 crc kubenswrapper[5113]: I0121 09:30:49.183534 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-vpdxw" event={"ID":"7944be2a-7f45-495b-90e5-b31570149a43","Type":"ContainerStarted","Data":"a02509d488979e69b9700281db2397cce9e344a9a60a4dac74e878031e2ce6fa"}
Jan 21 09:30:49 crc kubenswrapper[5113]: I0121 09:30:49.185296 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-mfk8j" event={"ID":"a33c9424-1bbb-4a0c-9fb2-fd5eb6de667e","Type":"ContainerStarted","Data":"edb05bce47ea16fc7434ab20f5653f976f7c3602b88e1f28742c42d709799010"}
Jan 21 09:30:49 crc kubenswrapper[5113]: I0121 09:30:49.185399 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-669c9f96b5-mfk8j"
Jan 21 09:30:49 crc kubenswrapper[5113]: I0121 09:30:49.187886 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-578f8f8d6c-f524v" event={"ID":"019809be-ccc7-49df-89f9-84eff425459d","Type":"ContainerStarted","Data":"db07b8f0a54ecf88808b1dd915d9a976b764c182f35dda0dca0c3ac8446e9c5a"}
Jan 21 09:30:49 crc kubenswrapper[5113]: I0121 09:30:49.188994 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ddf46cb8b-sr4n4" event={"ID":"3843f0f9-ae8b-4934-a635-75e80ae8379d","Type":"ContainerStarted","Data":"3c166753b6ed12ab16cb0d7c8b9d770085779dbe5946f1383ab1339fbc87b727"}
Jan 21 09:30:49 crc kubenswrapper[5113]: I0121 09:30:49.190358 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-prrkk" event={"ID":"7c1cc988-c5a8-4ee1-a41b-1fd925a848dc","Type":"ContainerStarted","Data":"b53ef7f2e3b597620dc9ac0f87c145b6c4f09dea5d1678d34d0f4bdb904f3c28"}
Jan 21 09:30:49 crc kubenswrapper[5113]: I0121 09:30:49.191970 5113 generic.go:358] "Generic (PLEG): container finished" podID="52e0414b-6283-42ab-9e76-609f811f45c8" containerID="3dd369b0f4c10b35ca886859f53da6464ed6585d603840e2a0ff537cdeebb455" exitCode=0
Jan 21 09:30:49 crc kubenswrapper[5113]: I0121 09:30:49.192003 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ll4b" event={"ID":"52e0414b-6283-42ab-9e76-609f811f45c8","Type":"ContainerDied","Data":"3dd369b0f4c10b35ca886859f53da6464ed6585d603840e2a0ff537cdeebb455"}
Jan 21 09:30:49 crc kubenswrapper[5113]: I0121 09:30:49.210180 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-85c68dddb-5jl7x"
Jan 21 09:30:49 crc kubenswrapper[5113]: I0121 09:30:49.212290 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ddf46cb8b-xs9qp" podStartSLOduration=2.5977996020000003 podStartE2EDuration="21.212274586s" podCreationTimestamp="2026-01-21 09:30:28 +0000 UTC" firstStartedPulling="2026-01-21 09:30:30.063601971 +0000 UTC m=+759.564429020" lastFinishedPulling="2026-01-21 09:30:48.678076955 +0000 UTC m=+778.178904004" observedRunningTime="2026-01-21 09:30:49.211140943 +0000 UTC m=+778.711967992" watchObservedRunningTime="2026-01-21 09:30:49.212274586 +0000 UTC m=+778.713101635"
Jan 21 09:30:49 crc kubenswrapper[5113]: I0121 09:30:49.278209 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-78b9bd8798-prrkk" podStartSLOduration=1.598250641 podStartE2EDuration="16.278191265s" podCreationTimestamp="2026-01-21 09:30:33 +0000 UTC" firstStartedPulling="2026-01-21 09:30:34.132144262 +0000 UTC m=+763.632971301" lastFinishedPulling="2026-01-21 09:30:48.812084876 +0000 UTC m=+778.312911925" observedRunningTime="2026-01-21 09:30:49.276172457 +0000 UTC m=+778.776999506" watchObservedRunningTime="2026-01-21 09:30:49.278191265 +0000 UTC m=+778.779018314"
Jan 21 09:30:49 crc kubenswrapper[5113]: I0121 09:30:49.279500 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-669c9f96b5-mfk8j" podStartSLOduration=2.979195238 podStartE2EDuration="21.279493923s" podCreationTimestamp="2026-01-21 09:30:28 +0000 UTC" firstStartedPulling="2026-01-21 09:30:30.362416725 +0000 UTC m=+759.863243784" lastFinishedPulling="2026-01-21 09:30:48.66271542 +0000 UTC m=+778.163542469" observedRunningTime="2026-01-21 09:30:49.247692532 +0000 UTC m=+778.748519581" watchObservedRunningTime="2026-01-21 09:30:49.279493923 +0000 UTC m=+778.780320972"
Jan 21 09:30:49 crc kubenswrapper[5113]: I0121 09:30:49.325566 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-85c68dddb-5jl7x" podStartSLOduration=3.209698534 podStartE2EDuration="21.325542557s" podCreationTimestamp="2026-01-21 09:30:28 +0000 UTC" firstStartedPulling="2026-01-21 09:30:29.895343318 +0000 UTC m=+759.396170367" lastFinishedPulling="2026-01-21 09:30:48.011187341 +0000 UTC m=+777.512014390" observedRunningTime="2026-01-21 09:30:49.318037779 +0000 UTC m=+778.818864828" watchObservedRunningTime="2026-01-21 09:30:49.325542557 +0000 UTC m=+778.826369606"
Jan 21 09:30:49 crc kubenswrapper[5113]: I0121 09:30:49.426969 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5ddf46cb8b-sr4n4" podStartSLOduration=3.395423473 podStartE2EDuration="21.426949674s" podCreationTimestamp="2026-01-21 09:30:28 +0000 UTC" firstStartedPulling="2026-01-21 09:30:29.97967826 +0000 UTC m=+759.480505309" lastFinishedPulling="2026-01-21 09:30:48.011204461 +0000 UTC m=+777.512031510" observedRunningTime="2026-01-21 09:30:49.418493219 +0000 UTC m=+778.919320268" watchObservedRunningTime="2026-01-21 09:30:49.426949674 +0000 UTC m=+778.927776723"
Jan 21 09:30:49 crc kubenswrapper[5113]: I0121 09:30:49.475066 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-vpdxw" podStartSLOduration=2.991358898 podStartE2EDuration="22.475043776s" podCreationTimestamp="2026-01-21 09:30:27 +0000 UTC" firstStartedPulling="2026-01-21 09:30:29.179032752 +0000 UTC m=+758.679859801" lastFinishedPulling="2026-01-21 09:30:48.66271763 +0000 UTC m=+778.163544679" observedRunningTime="2026-01-21 09:30:49.469249329 +0000 UTC m=+778.970076378" watchObservedRunningTime="2026-01-21 09:30:49.475043776 +0000 UTC m=+778.975870825"
Jan 21 09:30:50 crc kubenswrapper[5113]: I0121 09:30:50.206571 5113 generic.go:358] "Generic (PLEG): container finished" podID="52e0414b-6283-42ab-9e76-609f811f45c8" containerID="3192cb57dbcb4f9521aa7bf9beb66ccbb802e9619717efec92a0d9d0944f7afa" exitCode=0
Jan 21 09:30:50 crc kubenswrapper[5113]: I0121 09:30:50.206686 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ll4b" event={"ID":"52e0414b-6283-42ab-9e76-609f811f45c8","Type":"ContainerDied","Data":"3192cb57dbcb4f9521aa7bf9beb66ccbb802e9619717efec92a0d9d0944f7afa"}
Jan 21 09:30:51 crc kubenswrapper[5113]: I0121 09:30:51.371160 5113 scope.go:117] "RemoveContainer" containerID="cdca3673f2e37f84a3b4b47d9ed6475d580ad0ee5a65c8594f48ee96506caed4"
Jan 21 09:30:51 crc kubenswrapper[5113]: I0121 09:30:51.536141 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ll4b"
Jan 21 09:30:51 crc kubenswrapper[5113]: I0121 09:30:51.578056 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/52e0414b-6283-42ab-9e76-609f811f45c8-util\") pod \"52e0414b-6283-42ab-9e76-609f811f45c8\" (UID: \"52e0414b-6283-42ab-9e76-609f811f45c8\") "
Jan 21 09:30:51 crc kubenswrapper[5113]: I0121 09:30:51.578399 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/52e0414b-6283-42ab-9e76-609f811f45c8-bundle\") pod \"52e0414b-6283-42ab-9e76-609f811f45c8\" (UID: \"52e0414b-6283-42ab-9e76-609f811f45c8\") "
Jan 21 09:30:51 crc kubenswrapper[5113]: I0121 09:30:51.578523 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcstq\" (UniqueName: \"kubernetes.io/projected/52e0414b-6283-42ab-9e76-609f811f45c8-kube-api-access-jcstq\") pod \"52e0414b-6283-42ab-9e76-609f811f45c8\" (UID: \"52e0414b-6283-42ab-9e76-609f811f45c8\") "
Jan 21 09:30:51 crc kubenswrapper[5113]: I0121 09:30:51.579393 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52e0414b-6283-42ab-9e76-609f811f45c8-bundle" (OuterVolumeSpecName: "bundle") pod "52e0414b-6283-42ab-9e76-609f811f45c8" (UID: "52e0414b-6283-42ab-9e76-609f811f45c8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:30:51 crc kubenswrapper[5113]: I0121 09:30:51.593419 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52e0414b-6283-42ab-9e76-609f811f45c8-kube-api-access-jcstq" (OuterVolumeSpecName: "kube-api-access-jcstq") pod "52e0414b-6283-42ab-9e76-609f811f45c8" (UID: "52e0414b-6283-42ab-9e76-609f811f45c8"). InnerVolumeSpecName "kube-api-access-jcstq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:30:51 crc kubenswrapper[5113]: I0121 09:30:51.604474 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52e0414b-6283-42ab-9e76-609f811f45c8-util" (OuterVolumeSpecName: "util") pod "52e0414b-6283-42ab-9e76-609f811f45c8" (UID: "52e0414b-6283-42ab-9e76-609f811f45c8"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:30:51 crc kubenswrapper[5113]: I0121 09:30:51.680179 5113 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/52e0414b-6283-42ab-9e76-609f811f45c8-util\") on node \"crc\" DevicePath \"\""
Jan 21 09:30:51 crc kubenswrapper[5113]: I0121 09:30:51.680216 5113 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/52e0414b-6283-42ab-9e76-609f811f45c8-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 09:30:51 crc kubenswrapper[5113]: I0121 09:30:51.680225 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jcstq\" (UniqueName: \"kubernetes.io/projected/52e0414b-6283-42ab-9e76-609f811f45c8-kube-api-access-jcstq\") on node \"crc\" DevicePath \"\""
Jan 21 09:30:52 crc kubenswrapper[5113]: I0121 09:30:52.227327 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ll4b"
Jan 21 09:30:52 crc kubenswrapper[5113]: I0121 09:30:52.227490 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ll4b" event={"ID":"52e0414b-6283-42ab-9e76-609f811f45c8","Type":"ContainerDied","Data":"b6a8c92147959a1ee6afb24864c882b166e4b98b239053e2eefaaad0c09b3d5b"}
Jan 21 09:30:52 crc kubenswrapper[5113]: I0121 09:30:52.227537 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6a8c92147959a1ee6afb24864c882b166e4b98b239053e2eefaaad0c09b3d5b"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.257909 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-578f8f8d6c-f524v" event={"ID":"019809be-ccc7-49df-89f9-84eff425459d","Type":"ContainerStarted","Data":"778afd69c4bfa05f6070d9c6554d96646f4ed9f5c9372e5320c319e2f9757bfa"}
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.280332 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-578f8f8d6c-f524v" podStartSLOduration=15.211602138 podStartE2EDuration="19.280311984s" podCreationTimestamp="2026-01-21 09:30:35 +0000 UTC" firstStartedPulling="2026-01-21 09:30:49.04182501 +0000 UTC m=+778.542652059" lastFinishedPulling="2026-01-21 09:30:53.110534866 +0000 UTC m=+782.611361905" observedRunningTime="2026-01-21 09:30:54.276195724 +0000 UTC m=+783.777022813" watchObservedRunningTime="2026-01-21 09:30:54.280311984 +0000 UTC m=+783.781139073"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.618334 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.619113 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="52e0414b-6283-42ab-9e76-609f811f45c8" containerName="extract"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.619140 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="52e0414b-6283-42ab-9e76-609f811f45c8" containerName="extract"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.619160 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5a48d508-39d9-4ba5-bc97-1355f781b5b2" containerName="extract-utilities"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.619169 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a48d508-39d9-4ba5-bc97-1355f781b5b2" containerName="extract-utilities"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.619180 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5a48d508-39d9-4ba5-bc97-1355f781b5b2" containerName="registry-server"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.619188 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a48d508-39d9-4ba5-bc97-1355f781b5b2" containerName="registry-server"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.619216 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5a48d508-39d9-4ba5-bc97-1355f781b5b2" containerName="extract-content"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.619224 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a48d508-39d9-4ba5-bc97-1355f781b5b2" containerName="extract-content"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.619234 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="61b36320-1108-4c53-b36e-485342f03802" containerName="extract-utilities"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.619241 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="61b36320-1108-4c53-b36e-485342f03802" containerName="extract-utilities"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.619253 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="52e0414b-6283-42ab-9e76-609f811f45c8" containerName="pull"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.619261 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="52e0414b-6283-42ab-9e76-609f811f45c8" containerName="pull"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.619279 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="61b36320-1108-4c53-b36e-485342f03802" containerName="registry-server"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.619287 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="61b36320-1108-4c53-b36e-485342f03802" containerName="registry-server"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.619297 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="52e0414b-6283-42ab-9e76-609f811f45c8" containerName="util"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.619305 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="52e0414b-6283-42ab-9e76-609f811f45c8" containerName="util"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.619316 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="61b36320-1108-4c53-b36e-485342f03802" containerName="extract-content"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.619323 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="61b36320-1108-4c53-b36e-485342f03802" containerName="extract-content"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.619426 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="61b36320-1108-4c53-b36e-485342f03802" containerName="registry-server"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.619440 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="52e0414b-6283-42ab-9e76-609f811f45c8" containerName="extract"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.619454 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="5a48d508-39d9-4ba5-bc97-1355f781b5b2" containerName="registry-server"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.655451 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.655588 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.658665 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\""
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.659141 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\""
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.659430 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\""
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.659779 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\""
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.659914 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\""
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.659999 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\""
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.660168 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-dbdtj\""
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.660212 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\""
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.660397 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\""
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.724099 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/961a978d-fbd8-415d-a41f-b80b9693e721-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.724142 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/961a978d-fbd8-415d-a41f-b80b9693e721-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.724181 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/961a978d-fbd8-415d-a41f-b80b9693e721-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.724216 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/961a978d-fbd8-415d-a41f-b80b9693e721-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.724273 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/961a978d-fbd8-415d-a41f-b80b9693e721-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.724323 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/961a978d-fbd8-415d-a41f-b80b9693e721-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.724357 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/961a978d-fbd8-415d-a41f-b80b9693e721-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.724376 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/961a978d-fbd8-415d-a41f-b80b9693e721-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.724398 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/961a978d-fbd8-415d-a41f-b80b9693e721-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.724421 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/961a978d-fbd8-415d-a41f-b80b9693e721-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.724449 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/961a978d-fbd8-415d-a41f-b80b9693e721-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.724499 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/961a978d-fbd8-415d-a41f-b80b9693e721-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.724520 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/961a978d-fbd8-415d-a41f-b80b9693e721-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.724575 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/961a978d-fbd8-415d-a41f-b80b9693e721-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.724633 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/961a978d-fbd8-415d-a41f-b80b9693e721-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.826105 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/961a978d-fbd8-415d-a41f-b80b9693e721-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.826152 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/961a978d-fbd8-415d-a41f-b80b9693e721-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.826184 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/961a978d-fbd8-415d-a41f-b80b9693e721-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.826215 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/961a978d-fbd8-415d-a41f-b80b9693e721-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.826232 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/961a978d-fbd8-415d-a41f-b80b9693e721-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.826252 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/961a978d-fbd8-415d-a41f-b80b9693e721-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.826286 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/961a978d-fbd8-415d-a41f-b80b9693e721-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.826309 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/961a978d-fbd8-415d-a41f-b80b9693e721-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.826329 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/961a978d-fbd8-415d-a41f-b80b9693e721-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.826349 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/961a978d-fbd8-415d-a41f-b80b9693e721-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.826369 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/961a978d-fbd8-415d-a41f-b80b9693e721-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.826409 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName:
\"kubernetes.io/configmap/961a978d-fbd8-415d-a41f-b80b9693e721-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.826428 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/961a978d-fbd8-415d-a41f-b80b9693e721-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.826450 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/961a978d-fbd8-415d-a41f-b80b9693e721-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.826472 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/961a978d-fbd8-415d-a41f-b80b9693e721-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.826792 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/961a978d-fbd8-415d-a41f-b80b9693e721-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 
09:30:54.826859 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/961a978d-fbd8-415d-a41f-b80b9693e721-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.827091 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/961a978d-fbd8-415d-a41f-b80b9693e721-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.827421 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/961a978d-fbd8-415d-a41f-b80b9693e721-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.827490 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/961a978d-fbd8-415d-a41f-b80b9693e721-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.827505 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/961a978d-fbd8-415d-a41f-b80b9693e721-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " 
pod="service-telemetry/elasticsearch-es-default-0" Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.827536 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/961a978d-fbd8-415d-a41f-b80b9693e721-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.828470 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/961a978d-fbd8-415d-a41f-b80b9693e721-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.835531 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/961a978d-fbd8-415d-a41f-b80b9693e721-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.835602 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/961a978d-fbd8-415d-a41f-b80b9693e721-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.835650 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/961a978d-fbd8-415d-a41f-b80b9693e721-elastic-internal-http-certificates\") pod 
\"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.835685 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/961a978d-fbd8-415d-a41f-b80b9693e721-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.844161 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/961a978d-fbd8-415d-a41f-b80b9693e721-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.845469 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/961a978d-fbd8-415d-a41f-b80b9693e721-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.848572 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/961a978d-fbd8-415d-a41f-b80b9693e721-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"961a978d-fbd8-415d-a41f-b80b9693e721\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 21 09:30:54 crc kubenswrapper[5113]: I0121 09:30:54.970281 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Jan 21 09:30:55 crc kubenswrapper[5113]: I0121 09:30:55.204536 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 21 09:30:55 crc kubenswrapper[5113]: W0121 09:30:55.215847 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod961a978d_fbd8_415d_a41f_b80b9693e721.slice/crio-a4b5bdbd066ad3df2acf28304bae256ae6ccdcc224976b9ab501b60cb9079092 WatchSource:0}: Error finding container a4b5bdbd066ad3df2acf28304bae256ae6ccdcc224976b9ab501b60cb9079092: Status 404 returned error can't find the container with id a4b5bdbd066ad3df2acf28304bae256ae6ccdcc224976b9ab501b60cb9079092 Jan 21 09:30:55 crc kubenswrapper[5113]: I0121 09:30:55.264059 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"961a978d-fbd8-415d-a41f-b80b9693e721","Type":"ContainerStarted","Data":"a4b5bdbd066ad3df2acf28304bae256ae6ccdcc224976b9ab501b60cb9079092"} Jan 21 09:30:58 crc kubenswrapper[5113]: I0121 09:30:58.339496 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 09:30:58 crc kubenswrapper[5113]: I0121 09:30:58.339865 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 09:31:00 crc kubenswrapper[5113]: I0121 09:31:00.209582 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operators/perses-operator-669c9f96b5-mfk8j" Jan 21 09:31:04 crc kubenswrapper[5113]: I0121 09:31:04.686106 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-zs987"] Jan 21 09:31:04 crc kubenswrapper[5113]: I0121 09:31:04.797225 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-zs987"] Jan 21 09:31:04 crc kubenswrapper[5113]: I0121 09:31:04.797393 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-zs987" Jan 21 09:31:04 crc kubenswrapper[5113]: I0121 09:31:04.803684 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\"" Jan 21 09:31:04 crc kubenswrapper[5113]: I0121 09:31:04.803684 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-97728\"" Jan 21 09:31:04 crc kubenswrapper[5113]: I0121 09:31:04.809618 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\"" Jan 21 09:31:04 crc kubenswrapper[5113]: I0121 09:31:04.864541 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/18490327-e218-4730-ba06-61ff941cd2f1-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-zs987\" (UID: \"18490327-e218-4730-ba06-61ff941cd2f1\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-zs987" Jan 21 09:31:04 crc kubenswrapper[5113]: I0121 09:31:04.864807 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s69rz\" (UniqueName: 
\"kubernetes.io/projected/18490327-e218-4730-ba06-61ff941cd2f1-kube-api-access-s69rz\") pod \"cert-manager-operator-controller-manager-64c74584c4-zs987\" (UID: \"18490327-e218-4730-ba06-61ff941cd2f1\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-zs987" Jan 21 09:31:04 crc kubenswrapper[5113]: I0121 09:31:04.966668 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/18490327-e218-4730-ba06-61ff941cd2f1-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-zs987\" (UID: \"18490327-e218-4730-ba06-61ff941cd2f1\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-zs987" Jan 21 09:31:04 crc kubenswrapper[5113]: I0121 09:31:04.966806 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s69rz\" (UniqueName: \"kubernetes.io/projected/18490327-e218-4730-ba06-61ff941cd2f1-kube-api-access-s69rz\") pod \"cert-manager-operator-controller-manager-64c74584c4-zs987\" (UID: \"18490327-e218-4730-ba06-61ff941cd2f1\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-zs987" Jan 21 09:31:04 crc kubenswrapper[5113]: I0121 09:31:04.967693 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/18490327-e218-4730-ba06-61ff941cd2f1-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-zs987\" (UID: \"18490327-e218-4730-ba06-61ff941cd2f1\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-zs987" Jan 21 09:31:05 crc kubenswrapper[5113]: I0121 09:31:05.001853 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s69rz\" (UniqueName: \"kubernetes.io/projected/18490327-e218-4730-ba06-61ff941cd2f1-kube-api-access-s69rz\") pod \"cert-manager-operator-controller-manager-64c74584c4-zs987\" (UID: 
\"18490327-e218-4730-ba06-61ff941cd2f1\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-zs987" Jan 21 09:31:05 crc kubenswrapper[5113]: I0121 09:31:05.118420 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-zs987" Jan 21 09:31:15 crc kubenswrapper[5113]: I0121 09:31:15.606055 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-zs987"] Jan 21 09:31:15 crc kubenswrapper[5113]: W0121 09:31:15.611862 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18490327_e218_4730_ba06_61ff941cd2f1.slice/crio-149870f27fb0a5b3a03c559884fb563c3f4f15312e2e4fd1e22407667ccaae10 WatchSource:0}: Error finding container 149870f27fb0a5b3a03c559884fb563c3f4f15312e2e4fd1e22407667ccaae10: Status 404 returned error can't find the container with id 149870f27fb0a5b3a03c559884fb563c3f4f15312e2e4fd1e22407667ccaae10 Jan 21 09:31:16 crc kubenswrapper[5113]: I0121 09:31:16.383689 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"961a978d-fbd8-415d-a41f-b80b9693e721","Type":"ContainerStarted","Data":"8f8fe68cba9d035d69b0e18e3d77fbb8f55a2e7e71f716fb1aca1fdd398d0639"} Jan 21 09:31:16 crc kubenswrapper[5113]: I0121 09:31:16.384818 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-zs987" event={"ID":"18490327-e218-4730-ba06-61ff941cd2f1","Type":"ContainerStarted","Data":"149870f27fb0a5b3a03c559884fb563c3f4f15312e2e4fd1e22407667ccaae10"} Jan 21 09:31:16 crc kubenswrapper[5113]: I0121 09:31:16.529701 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 21 09:31:16 crc kubenswrapper[5113]: I0121 
09:31:16.560958 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 21 09:31:18 crc kubenswrapper[5113]: I0121 09:31:18.405974 5113 generic.go:358] "Generic (PLEG): container finished" podID="961a978d-fbd8-415d-a41f-b80b9693e721" containerID="8f8fe68cba9d035d69b0e18e3d77fbb8f55a2e7e71f716fb1aca1fdd398d0639" exitCode=0 Jan 21 09:31:18 crc kubenswrapper[5113]: I0121 09:31:18.406148 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"961a978d-fbd8-415d-a41f-b80b9693e721","Type":"ContainerDied","Data":"8f8fe68cba9d035d69b0e18e3d77fbb8f55a2e7e71f716fb1aca1fdd398d0639"} Jan 21 09:31:22 crc kubenswrapper[5113]: I0121 09:31:22.439802 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-zs987" event={"ID":"18490327-e218-4730-ba06-61ff941cd2f1","Type":"ContainerStarted","Data":"6574442e35f55a3b6c3c83d2d01749e04bf080b6e556c15bc5a4e03ba7448540"} Jan 21 09:31:22 crc kubenswrapper[5113]: I0121 09:31:22.442435 5113 generic.go:358] "Generic (PLEG): container finished" podID="961a978d-fbd8-415d-a41f-b80b9693e721" containerID="fb426af5c09e622d755f9d3b4bcf534e194fc9ea4d858526c20083b8455158e9" exitCode=0 Jan 21 09:31:22 crc kubenswrapper[5113]: I0121 09:31:22.442550 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"961a978d-fbd8-415d-a41f-b80b9693e721","Type":"ContainerDied","Data":"fb426af5c09e622d755f9d3b4bcf534e194fc9ea4d858526c20083b8455158e9"} Jan 21 09:31:22 crc kubenswrapper[5113]: I0121 09:31:22.477087 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-zs987" podStartSLOduration=12.659420266 podStartE2EDuration="18.477067824s" podCreationTimestamp="2026-01-21 09:31:04 +0000 UTC" 
firstStartedPulling="2026-01-21 09:31:15.614228406 +0000 UTC m=+805.115055465" lastFinishedPulling="2026-01-21 09:31:21.431875974 +0000 UTC m=+810.932703023" observedRunningTime="2026-01-21 09:31:22.470538125 +0000 UTC m=+811.971365184" watchObservedRunningTime="2026-01-21 09:31:22.477067824 +0000 UTC m=+811.977894883" Jan 21 09:31:23 crc kubenswrapper[5113]: I0121 09:31:23.454814 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"961a978d-fbd8-415d-a41f-b80b9693e721","Type":"ContainerStarted","Data":"60617a3d1a69241ecfda573a4c760e88fbac583a3865c955ba76b4763a97be28"} Jan 21 09:31:23 crc kubenswrapper[5113]: I0121 09:31:23.455278 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0" Jan 21 09:31:23 crc kubenswrapper[5113]: I0121 09:31:23.497621 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=9.434164613 podStartE2EDuration="29.49759389s" podCreationTimestamp="2026-01-21 09:30:54 +0000 UTC" firstStartedPulling="2026-01-21 09:30:55.235378684 +0000 UTC m=+784.736205753" lastFinishedPulling="2026-01-21 09:31:15.298807971 +0000 UTC m=+804.799635030" observedRunningTime="2026-01-21 09:31:23.495548031 +0000 UTC m=+812.996375160" watchObservedRunningTime="2026-01-21 09:31:23.49759389 +0000 UTC m=+812.998420979" Jan 21 09:31:24 crc kubenswrapper[5113]: I0121 09:31:24.535989 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-jzwdf"] Jan 21 09:31:24 crc kubenswrapper[5113]: I0121 09:31:24.541381 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-jzwdf" Jan 21 09:31:24 crc kubenswrapper[5113]: I0121 09:31:24.543171 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\"" Jan 21 09:31:24 crc kubenswrapper[5113]: I0121 09:31:24.543362 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-b8rbl\"" Jan 21 09:31:24 crc kubenswrapper[5113]: I0121 09:31:24.543537 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\"" Jan 21 09:31:24 crc kubenswrapper[5113]: I0121 09:31:24.547363 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-jzwdf"] Jan 21 09:31:24 crc kubenswrapper[5113]: I0121 09:31:24.617786 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cb5149c7-b193-48df-903d-729ae193fca0-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-jzwdf\" (UID: \"cb5149c7-b193-48df-903d-729ae193fca0\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-jzwdf" Jan 21 09:31:24 crc kubenswrapper[5113]: I0121 09:31:24.617888 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9f2b\" (UniqueName: \"kubernetes.io/projected/cb5149c7-b193-48df-903d-729ae193fca0-kube-api-access-f9f2b\") pod \"cert-manager-webhook-7894b5b9b4-jzwdf\" (UID: \"cb5149c7-b193-48df-903d-729ae193fca0\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-jzwdf" Jan 21 09:31:24 crc kubenswrapper[5113]: I0121 09:31:24.718778 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cb5149c7-b193-48df-903d-729ae193fca0-bound-sa-token\") pod 
\"cert-manager-webhook-7894b5b9b4-jzwdf\" (UID: \"cb5149c7-b193-48df-903d-729ae193fca0\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-jzwdf" Jan 21 09:31:24 crc kubenswrapper[5113]: I0121 09:31:24.718847 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f9f2b\" (UniqueName: \"kubernetes.io/projected/cb5149c7-b193-48df-903d-729ae193fca0-kube-api-access-f9f2b\") pod \"cert-manager-webhook-7894b5b9b4-jzwdf\" (UID: \"cb5149c7-b193-48df-903d-729ae193fca0\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-jzwdf" Jan 21 09:31:24 crc kubenswrapper[5113]: I0121 09:31:24.743357 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 21 09:31:24 crc kubenswrapper[5113]: I0121 09:31:24.748705 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cb5149c7-b193-48df-903d-729ae193fca0-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-jzwdf\" (UID: \"cb5149c7-b193-48df-903d-729ae193fca0\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-jzwdf" Jan 21 09:31:24 crc kubenswrapper[5113]: I0121 09:31:24.748752 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9f2b\" (UniqueName: \"kubernetes.io/projected/cb5149c7-b193-48df-903d-729ae193fca0-kube-api-access-f9f2b\") pod \"cert-manager-webhook-7894b5b9b4-jzwdf\" (UID: \"cb5149c7-b193-48df-903d-729ae193fca0\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-jzwdf" Jan 21 09:31:24 crc kubenswrapper[5113]: I0121 09:31:24.808601 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 21 09:31:24 crc kubenswrapper[5113]: I0121 09:31:24.808708 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 09:31:24 crc kubenswrapper[5113]: I0121 09:31:24.812104 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-sys-config\"" Jan 21 09:31:24 crc kubenswrapper[5113]: I0121 09:31:24.812112 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-xwwzx\"" Jan 21 09:31:24 crc kubenswrapper[5113]: I0121 09:31:24.812582 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-global-ca\"" Jan 21 09:31:24 crc kubenswrapper[5113]: I0121 09:31:24.814144 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-ca\"" Jan 21 09:31:24 crc kubenswrapper[5113]: I0121 09:31:24.861507 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-jzwdf"
Jan 21 09:31:24 crc kubenswrapper[5113]: I0121 09:31:24.921068 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/48f10d3c-0d58-4df6-811a-b82a47c770a4-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 21 09:31:24 crc kubenswrapper[5113]: I0121 09:31:24.921116 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/48f10d3c-0d58-4df6-811a-b82a47c770a4-builder-dockercfg-xwwzx-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 21 09:31:24 crc kubenswrapper[5113]: I0121 09:31:24.921144 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/48f10d3c-0d58-4df6-811a-b82a47c770a4-builder-dockercfg-xwwzx-push\") pod \"service-telemetry-operator-1-build\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 21 09:31:24 crc kubenswrapper[5113]: I0121 09:31:24.921220 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm8w8\" (UniqueName: \"kubernetes.io/projected/48f10d3c-0d58-4df6-811a-b82a47c770a4-kube-api-access-sm8w8\") pod \"service-telemetry-operator-1-build\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 21 09:31:24 crc kubenswrapper[5113]: I0121 09:31:24.921247 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48f10d3c-0d58-4df6-811a-b82a47c770a4-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 21 09:31:24 crc kubenswrapper[5113]: I0121 09:31:24.921274 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/48f10d3c-0d58-4df6-811a-b82a47c770a4-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 21 09:31:24 crc kubenswrapper[5113]: I0121 09:31:24.921303 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/48f10d3c-0d58-4df6-811a-b82a47c770a4-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 21 09:31:24 crc kubenswrapper[5113]: I0121 09:31:24.921322 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48f10d3c-0d58-4df6-811a-b82a47c770a4-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 21 09:31:24 crc kubenswrapper[5113]: I0121 09:31:24.921337 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/48f10d3c-0d58-4df6-811a-b82a47c770a4-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 21 09:31:24 crc kubenswrapper[5113]: I0121 09:31:24.921356 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/48f10d3c-0d58-4df6-811a-b82a47c770a4-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 21 09:31:24 crc kubenswrapper[5113]: I0121 09:31:24.921380 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/48f10d3c-0d58-4df6-811a-b82a47c770a4-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 21 09:31:24 crc kubenswrapper[5113]: I0121 09:31:24.921408 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/48f10d3c-0d58-4df6-811a-b82a47c770a4-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 21 09:31:25 crc kubenswrapper[5113]: I0121 09:31:25.022348 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/48f10d3c-0d58-4df6-811a-b82a47c770a4-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 21 09:31:25 crc kubenswrapper[5113]: I0121 09:31:25.022695 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/48f10d3c-0d58-4df6-811a-b82a47c770a4-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 21 09:31:25 crc kubenswrapper[5113]: I0121 09:31:25.022723 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48f10d3c-0d58-4df6-811a-b82a47c770a4-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 21 09:31:25 crc kubenswrapper[5113]: I0121 09:31:25.022762 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/48f10d3c-0d58-4df6-811a-b82a47c770a4-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 21 09:31:25 crc kubenswrapper[5113]: I0121 09:31:25.022789 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/48f10d3c-0d58-4df6-811a-b82a47c770a4-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 21 09:31:25 crc kubenswrapper[5113]: I0121 09:31:25.022816 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/48f10d3c-0d58-4df6-811a-b82a47c770a4-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 21 09:31:25 crc kubenswrapper[5113]: I0121 09:31:25.022846 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/48f10d3c-0d58-4df6-811a-b82a47c770a4-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 21 09:31:25 crc kubenswrapper[5113]: I0121 09:31:25.022894 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/48f10d3c-0d58-4df6-811a-b82a47c770a4-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 21 09:31:25 crc kubenswrapper[5113]: I0121 09:31:25.022921 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/48f10d3c-0d58-4df6-811a-b82a47c770a4-builder-dockercfg-xwwzx-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 21 09:31:25 crc kubenswrapper[5113]: I0121 09:31:25.022946 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/48f10d3c-0d58-4df6-811a-b82a47c770a4-builder-dockercfg-xwwzx-push\") pod \"service-telemetry-operator-1-build\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 21 09:31:25 crc kubenswrapper[5113]: I0121 09:31:25.023003 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sm8w8\" (UniqueName: \"kubernetes.io/projected/48f10d3c-0d58-4df6-811a-b82a47c770a4-kube-api-access-sm8w8\") pod \"service-telemetry-operator-1-build\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 21 09:31:25 crc kubenswrapper[5113]: I0121 09:31:25.023033 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48f10d3c-0d58-4df6-811a-b82a47c770a4-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 21 09:31:25 crc kubenswrapper[5113]: I0121 09:31:25.023618 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/48f10d3c-0d58-4df6-811a-b82a47c770a4-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 21 09:31:25 crc kubenswrapper[5113]: I0121 09:31:25.023675 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48f10d3c-0d58-4df6-811a-b82a47c770a4-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 21 09:31:25 crc kubenswrapper[5113]: I0121 09:31:25.023828 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/48f10d3c-0d58-4df6-811a-b82a47c770a4-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 21 09:31:25 crc kubenswrapper[5113]: I0121 09:31:25.024021 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/48f10d3c-0d58-4df6-811a-b82a47c770a4-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 21 09:31:25 crc kubenswrapper[5113]: I0121 09:31:25.024148 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/48f10d3c-0d58-4df6-811a-b82a47c770a4-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 21 09:31:25 crc kubenswrapper[5113]: I0121 09:31:25.024183 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/48f10d3c-0d58-4df6-811a-b82a47c770a4-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 21 09:31:25 crc kubenswrapper[5113]: I0121 09:31:25.024339 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/48f10d3c-0d58-4df6-811a-b82a47c770a4-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 21 09:31:25 crc kubenswrapper[5113]: I0121 09:31:25.024673 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48f10d3c-0d58-4df6-811a-b82a47c770a4-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 21 09:31:25 crc kubenswrapper[5113]: I0121 09:31:25.024841 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/48f10d3c-0d58-4df6-811a-b82a47c770a4-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 21 09:31:25 crc kubenswrapper[5113]: I0121 09:31:25.028421 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/48f10d3c-0d58-4df6-811a-b82a47c770a4-builder-dockercfg-xwwzx-push\") pod \"service-telemetry-operator-1-build\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 21 09:31:25 crc kubenswrapper[5113]: I0121 09:31:25.042506 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/48f10d3c-0d58-4df6-811a-b82a47c770a4-builder-dockercfg-xwwzx-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 21 09:31:25 crc kubenswrapper[5113]: I0121 09:31:25.046176 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm8w8\" (UniqueName: \"kubernetes.io/projected/48f10d3c-0d58-4df6-811a-b82a47c770a4-kube-api-access-sm8w8\") pod \"service-telemetry-operator-1-build\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 21 09:31:25 crc kubenswrapper[5113]: I0121 09:31:25.124401 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build"
Jan 21 09:31:25 crc kubenswrapper[5113]: I0121 09:31:25.126116 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-jzwdf"]
Jan 21 09:31:25 crc kubenswrapper[5113]: W0121 09:31:25.127943 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb5149c7_b193_48df_903d_729ae193fca0.slice/crio-8f742c2a1264a4f8f9037ce5743dcc336b80383832c338395d776db2940928ad WatchSource:0}: Error finding container 8f742c2a1264a4f8f9037ce5743dcc336b80383832c338395d776db2940928ad: Status 404 returned error can't find the container with id 8f742c2a1264a4f8f9037ce5743dcc336b80383832c338395d776db2940928ad
Jan 21 09:31:25 crc kubenswrapper[5113]: I0121 09:31:25.474010 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-jzwdf" event={"ID":"cb5149c7-b193-48df-903d-729ae193fca0","Type":"ContainerStarted","Data":"8f742c2a1264a4f8f9037ce5743dcc336b80383832c338395d776db2940928ad"}
Jan 21 09:31:25 crc kubenswrapper[5113]: I0121 09:31:25.545195 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Jan 21 09:31:25 crc kubenswrapper[5113]: W0121 09:31:25.556933 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48f10d3c_0d58_4df6_811a_b82a47c770a4.slice/crio-c52175cd51c9a0fa5fb457322de26e495bb2b2bc6ab2c9595f83643c7910290e WatchSource:0}: Error finding container c52175cd51c9a0fa5fb457322de26e495bb2b2bc6ab2c9595f83643c7910290e: Status 404 returned error can't find the container with id c52175cd51c9a0fa5fb457322de26e495bb2b2bc6ab2c9595f83643c7910290e
Jan 21 09:31:26 crc kubenswrapper[5113]: I0121 09:31:26.485450 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"48f10d3c-0d58-4df6-811a-b82a47c770a4","Type":"ContainerStarted","Data":"c52175cd51c9a0fa5fb457322de26e495bb2b2bc6ab2c9595f83643c7910290e"}
Jan 21 09:31:26 crc kubenswrapper[5113]: I0121 09:31:26.939629 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-2g29d"]
Jan 21 09:31:26 crc kubenswrapper[5113]: I0121 09:31:26.944593 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2g29d"
Jan 21 09:31:26 crc kubenswrapper[5113]: I0121 09:31:26.948791 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-zfbk2\""
Jan 21 09:31:26 crc kubenswrapper[5113]: I0121 09:31:26.951313 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-2g29d"]
Jan 21 09:31:27 crc kubenswrapper[5113]: I0121 09:31:27.057908 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3bb23691-eea9-41a3-b66f-ecef43808bd5-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-2g29d\" (UID: \"3bb23691-eea9-41a3-b66f-ecef43808bd5\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2g29d"
Jan 21 09:31:27 crc kubenswrapper[5113]: I0121 09:31:27.058306 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfkwh\" (UniqueName: \"kubernetes.io/projected/3bb23691-eea9-41a3-b66f-ecef43808bd5-kube-api-access-vfkwh\") pod \"cert-manager-cainjector-7dbf76d5c8-2g29d\" (UID: \"3bb23691-eea9-41a3-b66f-ecef43808bd5\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2g29d"
Jan 21 09:31:27 crc kubenswrapper[5113]: I0121 09:31:27.159881 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3bb23691-eea9-41a3-b66f-ecef43808bd5-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-2g29d\" (UID: \"3bb23691-eea9-41a3-b66f-ecef43808bd5\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2g29d"
Jan 21 09:31:27 crc kubenswrapper[5113]: I0121 09:31:27.159946 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vfkwh\" (UniqueName: \"kubernetes.io/projected/3bb23691-eea9-41a3-b66f-ecef43808bd5-kube-api-access-vfkwh\") pod \"cert-manager-cainjector-7dbf76d5c8-2g29d\" (UID: \"3bb23691-eea9-41a3-b66f-ecef43808bd5\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2g29d"
Jan 21 09:31:27 crc kubenswrapper[5113]: I0121 09:31:27.180147 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfkwh\" (UniqueName: \"kubernetes.io/projected/3bb23691-eea9-41a3-b66f-ecef43808bd5-kube-api-access-vfkwh\") pod \"cert-manager-cainjector-7dbf76d5c8-2g29d\" (UID: \"3bb23691-eea9-41a3-b66f-ecef43808bd5\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2g29d"
Jan 21 09:31:27 crc kubenswrapper[5113]: I0121 09:31:27.197643 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3bb23691-eea9-41a3-b66f-ecef43808bd5-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-2g29d\" (UID: \"3bb23691-eea9-41a3-b66f-ecef43808bd5\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2g29d"
Jan 21 09:31:27 crc kubenswrapper[5113]: I0121 09:31:27.268300 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2g29d"
Jan 21 09:31:27 crc kubenswrapper[5113]: I0121 09:31:27.474472 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-2g29d"]
Jan 21 09:31:28 crc kubenswrapper[5113]: I0121 09:31:28.340411 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 09:31:28 crc kubenswrapper[5113]: I0121 09:31:28.340484 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 09:31:28 crc kubenswrapper[5113]: I0121 09:31:28.340539 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt"
Jan 21 09:31:28 crc kubenswrapper[5113]: I0121 09:31:28.341400 5113 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"313e78d1e84417b1cc72485f1361b34ce94e49f3a7ae332408769377ab7be1a0"} pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 09:31:28 crc kubenswrapper[5113]: I0121 09:31:28.341462 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" containerID="cri-o://313e78d1e84417b1cc72485f1361b34ce94e49f3a7ae332408769377ab7be1a0" gracePeriod=600
Jan 21 09:31:28 crc kubenswrapper[5113]: I0121 09:31:28.502583 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2g29d" event={"ID":"3bb23691-eea9-41a3-b66f-ecef43808bd5","Type":"ContainerStarted","Data":"6649983fa1d22c29216a5131e4734ae7aeddb4bd97b4757b5021680394860ebd"}
Jan 21 09:31:28 crc kubenswrapper[5113]: I0121 09:31:28.505239 5113 generic.go:358] "Generic (PLEG): container finished" podID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerID="313e78d1e84417b1cc72485f1361b34ce94e49f3a7ae332408769377ab7be1a0" exitCode=0
Jan 21 09:31:28 crc kubenswrapper[5113]: I0121 09:31:28.505311 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" event={"ID":"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd","Type":"ContainerDied","Data":"313e78d1e84417b1cc72485f1361b34ce94e49f3a7ae332408769377ab7be1a0"}
Jan 21 09:31:28 crc kubenswrapper[5113]: I0121 09:31:28.505353 5113 scope.go:117] "RemoveContainer" containerID="d8dfd060598d2c2b1438ddeabfcbeb2ae3fad707ebd8779b6a758c6a6601e505"
Jan 21 09:31:29 crc kubenswrapper[5113]: I0121 09:31:29.515059 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" event={"ID":"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd","Type":"ContainerStarted","Data":"33e175e75c8a0f70e28412b6b026a9f0b5987cfe2dfc69fc2d2d0b83fb73ab1c"}
Jan 21 09:31:34 crc kubenswrapper[5113]: I0121 09:31:34.543374 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858d87f86b-qnm4h"]
Jan 21 09:31:34 crc kubenswrapper[5113]: I0121 09:31:34.611640 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-qnm4h"]
Jan 21 09:31:34 crc kubenswrapper[5113]: I0121 09:31:34.611814 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-qnm4h"
Jan 21 09:31:34 crc kubenswrapper[5113]: I0121 09:31:34.615322 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-2hv7x\""
Jan 21 09:31:34 crc kubenswrapper[5113]: I0121 09:31:34.686073 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/36704923-46ee-4c88-8aa3-d789474e6929-bound-sa-token\") pod \"cert-manager-858d87f86b-qnm4h\" (UID: \"36704923-46ee-4c88-8aa3-d789474e6929\") " pod="cert-manager/cert-manager-858d87f86b-qnm4h"
Jan 21 09:31:34 crc kubenswrapper[5113]: I0121 09:31:34.686171 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2ld8\" (UniqueName: \"kubernetes.io/projected/36704923-46ee-4c88-8aa3-d789474e6929-kube-api-access-j2ld8\") pod \"cert-manager-858d87f86b-qnm4h\" (UID: \"36704923-46ee-4c88-8aa3-d789474e6929\") " pod="cert-manager/cert-manager-858d87f86b-qnm4h"
Jan 21 09:31:34 crc kubenswrapper[5113]: I0121 09:31:34.787558 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/36704923-46ee-4c88-8aa3-d789474e6929-bound-sa-token\") pod \"cert-manager-858d87f86b-qnm4h\" (UID: \"36704923-46ee-4c88-8aa3-d789474e6929\") " pod="cert-manager/cert-manager-858d87f86b-qnm4h"
Jan 21 09:31:34 crc kubenswrapper[5113]: I0121 09:31:34.787651 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j2ld8\" (UniqueName: \"kubernetes.io/projected/36704923-46ee-4c88-8aa3-d789474e6929-kube-api-access-j2ld8\") pod \"cert-manager-858d87f86b-qnm4h\" (UID: \"36704923-46ee-4c88-8aa3-d789474e6929\") " pod="cert-manager/cert-manager-858d87f86b-qnm4h"
Jan 21 09:31:34 crc kubenswrapper[5113]: I0121 09:31:34.816715 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2ld8\" (UniqueName: \"kubernetes.io/projected/36704923-46ee-4c88-8aa3-d789474e6929-kube-api-access-j2ld8\") pod \"cert-manager-858d87f86b-qnm4h\" (UID: \"36704923-46ee-4c88-8aa3-d789474e6929\") " pod="cert-manager/cert-manager-858d87f86b-qnm4h"
Jan 21 09:31:34 crc kubenswrapper[5113]: I0121 09:31:34.817599 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/36704923-46ee-4c88-8aa3-d789474e6929-bound-sa-token\") pod \"cert-manager-858d87f86b-qnm4h\" (UID: \"36704923-46ee-4c88-8aa3-d789474e6929\") " pod="cert-manager/cert-manager-858d87f86b-qnm4h"
Jan 21 09:31:34 crc kubenswrapper[5113]: I0121 09:31:34.925975 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-qnm4h"
Jan 21 09:31:35 crc kubenswrapper[5113]: I0121 09:31:35.175656 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Jan 21 09:31:35 crc kubenswrapper[5113]: I0121 09:31:35.545365 5113 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="961a978d-fbd8-415d-a41f-b80b9693e721" containerName="elasticsearch" probeResult="failure" output=<
Jan 21 09:31:35 crc kubenswrapper[5113]: {"timestamp": "2026-01-21T09:31:35+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Jan 21 09:31:35 crc kubenswrapper[5113]: >
Jan 21 09:31:37 crc kubenswrapper[5113]: I0121 09:31:37.242939 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"]
Jan 21 09:31:38 crc kubenswrapper[5113]: I0121 09:31:38.136387 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"]
Jan 21 09:31:38 crc kubenswrapper[5113]: I0121 09:31:38.136524 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 09:31:38 crc kubenswrapper[5113]: I0121 09:31:38.139124 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-sys-config\""
Jan 21 09:31:38 crc kubenswrapper[5113]: I0121 09:31:38.139508 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-global-ca\""
Jan 21 09:31:38 crc kubenswrapper[5113]: I0121 09:31:38.139820 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-ca\""
Jan 21 09:31:38 crc kubenswrapper[5113]: I0121 09:31:38.227713 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 09:31:38 crc kubenswrapper[5113]: I0121 09:31:38.227816 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 09:31:38 crc kubenswrapper[5113]: I0121 09:31:38.227846 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8mpl\" (UniqueName: \"kubernetes.io/projected/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-kube-api-access-w8mpl\") pod \"service-telemetry-operator-2-build\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 09:31:38 crc kubenswrapper[5113]: I0121 09:31:38.227900 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 09:31:38 crc kubenswrapper[5113]: I0121 09:31:38.227927 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 09:31:38 crc kubenswrapper[5113]: I0121 09:31:38.227947 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 09:31:38 crc kubenswrapper[5113]: I0121 09:31:38.227976 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 09:31:38 crc kubenswrapper[5113]: I0121 09:31:38.228005 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 09:31:38 crc kubenswrapper[5113]: I0121 09:31:38.228034 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 09:31:38 crc kubenswrapper[5113]: I0121 09:31:38.228058 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-builder-dockercfg-xwwzx-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 09:31:38 crc kubenswrapper[5113]: I0121 09:31:38.228090 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 09:31:38 crc kubenswrapper[5113]: I0121 09:31:38.228116 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-builder-dockercfg-xwwzx-push\") pod \"service-telemetry-operator-2-build\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 09:31:38 crc kubenswrapper[5113]: I0121 09:31:38.329883 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 09:31:38 crc kubenswrapper[5113]: I0121 09:31:38.329962 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w8mpl\" (UniqueName: \"kubernetes.io/projected/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-kube-api-access-w8mpl\") pod \"service-telemetry-operator-2-build\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 09:31:38 crc kubenswrapper[5113]: I0121 09:31:38.330044 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 09:31:38 crc kubenswrapper[5113]: I0121 09:31:38.330103 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 09:31:38 crc kubenswrapper[5113]: I0121 09:31:38.330138 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 09:31:38 crc kubenswrapper[5113]: I0121 09:31:38.330180 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 09:31:38 crc kubenswrapper[5113]: I0121 09:31:38.330265 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 09:31:38 crc kubenswrapper[5113]: I0121 09:31:38.330348 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 09:31:38 crc kubenswrapper[5113]: I0121 09:31:38.330399 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 09:31:38 crc kubenswrapper[5113]: I0121 09:31:38.330427 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-builder-dockercfg-xwwzx-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 09:31:38 crc kubenswrapper[5113]: I0121 09:31:38.330858 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 09:31:38 crc kubenswrapper[5113]: I0121 09:31:38.330939 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 09:31:38 crc kubenswrapper[5113]: I0121 09:31:38.330985 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-builder-dockercfg-xwwzx-push\") pod \"service-telemetry-operator-2-build\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 09:31:38 crc kubenswrapper[5113]: I0121 09:31:38.331115 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 09:31:38 crc kubenswrapper[5113]: I0121 09:31:38.331173 5113
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 09:31:38 crc kubenswrapper[5113]: I0121 09:31:38.331232 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 09:31:38 crc kubenswrapper[5113]: I0121 09:31:38.331270 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 09:31:38 crc kubenswrapper[5113]: I0121 09:31:38.331568 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 09:31:38 crc kubenswrapper[5113]: I0121 09:31:38.331906 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") " 
pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 09:31:38 crc kubenswrapper[5113]: I0121 09:31:38.332158 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 09:31:38 crc kubenswrapper[5113]: I0121 09:31:38.332368 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 09:31:38 crc kubenswrapper[5113]: I0121 09:31:38.338351 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-builder-dockercfg-xwwzx-push\") pod \"service-telemetry-operator-2-build\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 09:31:38 crc kubenswrapper[5113]: I0121 09:31:38.338626 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-builder-dockercfg-xwwzx-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 09:31:38 crc kubenswrapper[5113]: I0121 09:31:38.360124 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8mpl\" (UniqueName: \"kubernetes.io/projected/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-kube-api-access-w8mpl\") pod 
\"service-telemetry-operator-2-build\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 09:31:38 crc kubenswrapper[5113]: I0121 09:31:38.458579 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 21 09:31:39 crc kubenswrapper[5113]: I0121 09:31:39.357129 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-qnm4h"] Jan 21 09:31:39 crc kubenswrapper[5113]: I0121 09:31:39.505994 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 21 09:31:39 crc kubenswrapper[5113]: W0121 09:31:39.531067 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf409c2dd_3176_4573_8c3c_4b8a4f3ebc9d.slice/crio-63d35c34037cd6ee16e6dbf6c8a496cbc01e73926797134619cde9729bbe7006 WatchSource:0}: Error finding container 63d35c34037cd6ee16e6dbf6c8a496cbc01e73926797134619cde9729bbe7006: Status 404 returned error can't find the container with id 63d35c34037cd6ee16e6dbf6c8a496cbc01e73926797134619cde9729bbe7006 Jan 21 09:31:39 crc kubenswrapper[5113]: I0121 09:31:39.592695 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-jzwdf" event={"ID":"cb5149c7-b193-48df-903d-729ae193fca0","Type":"ContainerStarted","Data":"5fe0a0bf240554e69959cf7d41db1d4497f0653c2afe6691616c2f0191b1f402"} Jan 21 09:31:39 crc kubenswrapper[5113]: I0121 09:31:39.593299 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-jzwdf" Jan 21 09:31:39 crc kubenswrapper[5113]: I0121 09:31:39.606076 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" 
event={"ID":"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d","Type":"ContainerStarted","Data":"63d35c34037cd6ee16e6dbf6c8a496cbc01e73926797134619cde9729bbe7006"} Jan 21 09:31:39 crc kubenswrapper[5113]: I0121 09:31:39.618988 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-qnm4h" event={"ID":"36704923-46ee-4c88-8aa3-d789474e6929","Type":"ContainerStarted","Data":"b28ccb39a67f9890a3adb0393f6622b2ada1829f8b2da0f45191b39c0316c851"} Jan 21 09:31:39 crc kubenswrapper[5113]: I0121 09:31:39.619034 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-qnm4h" event={"ID":"36704923-46ee-4c88-8aa3-d789474e6929","Type":"ContainerStarted","Data":"25a84f6a5d0e2f26890b9405f48d6b3539232a28b16535510ba967be6238eb96"} Jan 21 09:31:39 crc kubenswrapper[5113]: I0121 09:31:39.619273 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-7894b5b9b4-jzwdf" podStartSLOduration=1.6021915930000001 podStartE2EDuration="15.618726853s" podCreationTimestamp="2026-01-21 09:31:24 +0000 UTC" firstStartedPulling="2026-01-21 09:31:25.134325133 +0000 UTC m=+814.635152182" lastFinishedPulling="2026-01-21 09:31:39.150860393 +0000 UTC m=+828.651687442" observedRunningTime="2026-01-21 09:31:39.610621269 +0000 UTC m=+829.111448318" watchObservedRunningTime="2026-01-21 09:31:39.618726853 +0000 UTC m=+829.119553922" Jan 21 09:31:39 crc kubenswrapper[5113]: I0121 09:31:39.627179 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2g29d" event={"ID":"3bb23691-eea9-41a3-b66f-ecef43808bd5","Type":"ContainerStarted","Data":"65e9a3cc9b7015211ca8d8a08fd7fbdbb6694e891ef2d3c43f21bbdece35e3bf"} Jan 21 09:31:39 crc kubenswrapper[5113]: I0121 09:31:39.640777 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858d87f86b-qnm4h" podStartSLOduration=5.640762921 
podStartE2EDuration="5.640762921s" podCreationTimestamp="2026-01-21 09:31:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:31:39.634409308 +0000 UTC m=+829.135236357" watchObservedRunningTime="2026-01-21 09:31:39.640762921 +0000 UTC m=+829.141589970" Jan 21 09:31:39 crc kubenswrapper[5113]: I0121 09:31:39.720044 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2g29d" podStartSLOduration=2.072820026 podStartE2EDuration="13.720025327s" podCreationTimestamp="2026-01-21 09:31:26 +0000 UTC" firstStartedPulling="2026-01-21 09:31:27.490270854 +0000 UTC m=+816.991097903" lastFinishedPulling="2026-01-21 09:31:39.137476145 +0000 UTC m=+828.638303204" observedRunningTime="2026-01-21 09:31:39.715347662 +0000 UTC m=+829.216174701" watchObservedRunningTime="2026-01-21 09:31:39.720025327 +0000 UTC m=+829.220852396" Jan 21 09:31:40 crc kubenswrapper[5113]: I0121 09:31:40.634567 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d","Type":"ContainerStarted","Data":"bfeb72c42668b3fa063cc58fc9b3875712b84a1d7e3c2750e67be9c2c81e11f8"} Jan 21 09:31:40 crc kubenswrapper[5113]: I0121 09:31:40.636253 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"48f10d3c-0d58-4df6-811a-b82a47c770a4","Type":"ContainerStarted","Data":"145050d65a4e57ae3e1646fa019d82af5da237a6a4e7deb5b8f6cbe6fa42d5f3"} Jan 21 09:31:40 crc kubenswrapper[5113]: I0121 09:31:40.636693 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-1-build" podUID="48f10d3c-0d58-4df6-811a-b82a47c770a4" containerName="manage-dockerfile" 
containerID="cri-o://145050d65a4e57ae3e1646fa019d82af5da237a6a4e7deb5b8f6cbe6fa42d5f3" gracePeriod=30 Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.134640 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_48f10d3c-0d58-4df6-811a-b82a47c770a4/manage-dockerfile/0.log" Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.134717 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.167857 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.247327 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sm8w8\" (UniqueName: \"kubernetes.io/projected/48f10d3c-0d58-4df6-811a-b82a47c770a4-kube-api-access-sm8w8\") pod \"48f10d3c-0d58-4df6-811a-b82a47c770a4\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.247380 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/48f10d3c-0d58-4df6-811a-b82a47c770a4-node-pullsecrets\") pod \"48f10d3c-0d58-4df6-811a-b82a47c770a4\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.247428 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/48f10d3c-0d58-4df6-811a-b82a47c770a4-builder-dockercfg-xwwzx-push\") pod \"48f10d3c-0d58-4df6-811a-b82a47c770a4\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.247450 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48f10d3c-0d58-4df6-811a-b82a47c770a4-build-ca-bundles\") pod \"48f10d3c-0d58-4df6-811a-b82a47c770a4\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.247509 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/48f10d3c-0d58-4df6-811a-b82a47c770a4-container-storage-run\") pod \"48f10d3c-0d58-4df6-811a-b82a47c770a4\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.247536 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/48f10d3c-0d58-4df6-811a-b82a47c770a4-buildworkdir\") pod \"48f10d3c-0d58-4df6-811a-b82a47c770a4\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.247566 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/48f10d3c-0d58-4df6-811a-b82a47c770a4-container-storage-root\") pod \"48f10d3c-0d58-4df6-811a-b82a47c770a4\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.247601 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/48f10d3c-0d58-4df6-811a-b82a47c770a4-buildcachedir\") pod \"48f10d3c-0d58-4df6-811a-b82a47c770a4\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.247615 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48f10d3c-0d58-4df6-811a-b82a47c770a4-build-proxy-ca-bundles\") pod \"48f10d3c-0d58-4df6-811a-b82a47c770a4\" (UID: 
\"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.247656 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/48f10d3c-0d58-4df6-811a-b82a47c770a4-build-blob-cache\") pod \"48f10d3c-0d58-4df6-811a-b82a47c770a4\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.247686 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/48f10d3c-0d58-4df6-811a-b82a47c770a4-builder-dockercfg-xwwzx-pull\") pod \"48f10d3c-0d58-4df6-811a-b82a47c770a4\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.247752 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/48f10d3c-0d58-4df6-811a-b82a47c770a4-build-system-configs\") pod \"48f10d3c-0d58-4df6-811a-b82a47c770a4\" (UID: \"48f10d3c-0d58-4df6-811a-b82a47c770a4\") " Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.248715 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48f10d3c-0d58-4df6-811a-b82a47c770a4-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "48f10d3c-0d58-4df6-811a-b82a47c770a4" (UID: "48f10d3c-0d58-4df6-811a-b82a47c770a4"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.251346 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48f10d3c-0d58-4df6-811a-b82a47c770a4-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "48f10d3c-0d58-4df6-811a-b82a47c770a4" (UID: "48f10d3c-0d58-4df6-811a-b82a47c770a4"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.251400 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48f10d3c-0d58-4df6-811a-b82a47c770a4-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "48f10d3c-0d58-4df6-811a-b82a47c770a4" (UID: "48f10d3c-0d58-4df6-811a-b82a47c770a4"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.251411 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48f10d3c-0d58-4df6-811a-b82a47c770a4-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "48f10d3c-0d58-4df6-811a-b82a47c770a4" (UID: "48f10d3c-0d58-4df6-811a-b82a47c770a4"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.251778 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48f10d3c-0d58-4df6-811a-b82a47c770a4-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "48f10d3c-0d58-4df6-811a-b82a47c770a4" (UID: "48f10d3c-0d58-4df6-811a-b82a47c770a4"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.251819 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48f10d3c-0d58-4df6-811a-b82a47c770a4-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "48f10d3c-0d58-4df6-811a-b82a47c770a4" (UID: "48f10d3c-0d58-4df6-811a-b82a47c770a4"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.252027 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48f10d3c-0d58-4df6-811a-b82a47c770a4-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "48f10d3c-0d58-4df6-811a-b82a47c770a4" (UID: "48f10d3c-0d58-4df6-811a-b82a47c770a4"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.254704 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48f10d3c-0d58-4df6-811a-b82a47c770a4-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "48f10d3c-0d58-4df6-811a-b82a47c770a4" (UID: "48f10d3c-0d58-4df6-811a-b82a47c770a4"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.257104 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48f10d3c-0d58-4df6-811a-b82a47c770a4-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "48f10d3c-0d58-4df6-811a-b82a47c770a4" (UID: "48f10d3c-0d58-4df6-811a-b82a47c770a4"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.257557 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48f10d3c-0d58-4df6-811a-b82a47c770a4-kube-api-access-sm8w8" (OuterVolumeSpecName: "kube-api-access-sm8w8") pod "48f10d3c-0d58-4df6-811a-b82a47c770a4" (UID: "48f10d3c-0d58-4df6-811a-b82a47c770a4"). InnerVolumeSpecName "kube-api-access-sm8w8". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.259245 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48f10d3c-0d58-4df6-811a-b82a47c770a4-builder-dockercfg-xwwzx-pull" (OuterVolumeSpecName: "builder-dockercfg-xwwzx-pull") pod "48f10d3c-0d58-4df6-811a-b82a47c770a4" (UID: "48f10d3c-0d58-4df6-811a-b82a47c770a4"). InnerVolumeSpecName "builder-dockercfg-xwwzx-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.261936 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48f10d3c-0d58-4df6-811a-b82a47c770a4-builder-dockercfg-xwwzx-push" (OuterVolumeSpecName: "builder-dockercfg-xwwzx-push") pod "48f10d3c-0d58-4df6-811a-b82a47c770a4" (UID: "48f10d3c-0d58-4df6-811a-b82a47c770a4"). InnerVolumeSpecName "builder-dockercfg-xwwzx-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.349251 5113 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/48f10d3c-0d58-4df6-811a-b82a47c770a4-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.349558 5113 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/48f10d3c-0d58-4df6-811a-b82a47c770a4-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.349566 5113 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48f10d3c-0d58-4df6-811a-b82a47c770a4-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.349577 5113 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" 
(UniqueName: \"kubernetes.io/empty-dir/48f10d3c-0d58-4df6-811a-b82a47c770a4-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.349586 5113 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/48f10d3c-0d58-4df6-811a-b82a47c770a4-builder-dockercfg-xwwzx-pull\") on node \"crc\" DevicePath \"\"" Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.349594 5113 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/48f10d3c-0d58-4df6-811a-b82a47c770a4-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.349601 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sm8w8\" (UniqueName: \"kubernetes.io/projected/48f10d3c-0d58-4df6-811a-b82a47c770a4-kube-api-access-sm8w8\") on node \"crc\" DevicePath \"\"" Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.349609 5113 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/48f10d3c-0d58-4df6-811a-b82a47c770a4-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.349648 5113 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/48f10d3c-0d58-4df6-811a-b82a47c770a4-builder-dockercfg-xwwzx-push\") on node \"crc\" DevicePath \"\"" Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.349657 5113 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48f10d3c-0d58-4df6-811a-b82a47c770a4-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.349664 5113 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/48f10d3c-0d58-4df6-811a-b82a47c770a4-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.349674 5113 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/48f10d3c-0d58-4df6-811a-b82a47c770a4-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.642366 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_48f10d3c-0d58-4df6-811a-b82a47c770a4/manage-dockerfile/0.log" Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.642403 5113 generic.go:358] "Generic (PLEG): container finished" podID="48f10d3c-0d58-4df6-811a-b82a47c770a4" containerID="145050d65a4e57ae3e1646fa019d82af5da237a6a4e7deb5b8f6cbe6fa42d5f3" exitCode=1 Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.643518 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build"
Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.643689 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"48f10d3c-0d58-4df6-811a-b82a47c770a4","Type":"ContainerDied","Data":"145050d65a4e57ae3e1646fa019d82af5da237a6a4e7deb5b8f6cbe6fa42d5f3"}
Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.643888 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"48f10d3c-0d58-4df6-811a-b82a47c770a4","Type":"ContainerDied","Data":"c52175cd51c9a0fa5fb457322de26e495bb2b2bc6ab2c9595f83643c7910290e"}
Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.643993 5113 scope.go:117] "RemoveContainer" containerID="145050d65a4e57ae3e1646fa019d82af5da237a6a4e7deb5b8f6cbe6fa42d5f3"
Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.667685 5113 scope.go:117] "RemoveContainer" containerID="145050d65a4e57ae3e1646fa019d82af5da237a6a4e7deb5b8f6cbe6fa42d5f3"
Jan 21 09:31:41 crc kubenswrapper[5113]: E0121 09:31:41.668278 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"145050d65a4e57ae3e1646fa019d82af5da237a6a4e7deb5b8f6cbe6fa42d5f3\": container with ID starting with 145050d65a4e57ae3e1646fa019d82af5da237a6a4e7deb5b8f6cbe6fa42d5f3 not found: ID does not exist" containerID="145050d65a4e57ae3e1646fa019d82af5da237a6a4e7deb5b8f6cbe6fa42d5f3"
Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.668318 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"145050d65a4e57ae3e1646fa019d82af5da237a6a4e7deb5b8f6cbe6fa42d5f3"} err="failed to get container status \"145050d65a4e57ae3e1646fa019d82af5da237a6a4e7deb5b8f6cbe6fa42d5f3\": rpc error: code = NotFound desc = could not find container \"145050d65a4e57ae3e1646fa019d82af5da237a6a4e7deb5b8f6cbe6fa42d5f3\": container with ID starting with 145050d65a4e57ae3e1646fa019d82af5da237a6a4e7deb5b8f6cbe6fa42d5f3 not found: ID does not exist"
Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.677496 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Jan 21 09:31:41 crc kubenswrapper[5113]: I0121 09:31:41.682702 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Jan 21 09:31:42 crc kubenswrapper[5113]: I0121 09:31:42.850391 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48f10d3c-0d58-4df6-811a-b82a47c770a4" path="/var/lib/kubelet/pods/48f10d3c-0d58-4df6-811a-b82a47c770a4/volumes"
Jan 21 09:31:45 crc kubenswrapper[5113]: I0121 09:31:45.639118 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-jzwdf"
Jan 21 09:31:49 crc kubenswrapper[5113]: I0121 09:31:49.692423 5113 generic.go:358] "Generic (PLEG): container finished" podID="f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d" containerID="bfeb72c42668b3fa063cc58fc9b3875712b84a1d7e3c2750e67be9c2c81e11f8" exitCode=0
Jan 21 09:31:49 crc kubenswrapper[5113]: I0121 09:31:49.692475 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d","Type":"ContainerDied","Data":"bfeb72c42668b3fa063cc58fc9b3875712b84a1d7e3c2750e67be9c2c81e11f8"}
Jan 21 09:31:50 crc kubenswrapper[5113]: I0121 09:31:50.705183 5113 generic.go:358] "Generic (PLEG): container finished" podID="f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d" containerID="afa86fdd2c72826b100a33f9008314ebd7a368ccb9bb1c4d1d6441af6bd3454e" exitCode=0
Jan 21 09:31:50 crc kubenswrapper[5113]: I0121 09:31:50.705250 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d","Type":"ContainerDied","Data":"afa86fdd2c72826b100a33f9008314ebd7a368ccb9bb1c4d1d6441af6bd3454e"}
Jan 21 09:31:50 crc kubenswrapper[5113]: I0121 09:31:50.754129 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d/manage-dockerfile/0.log"
Jan 21 09:31:51 crc kubenswrapper[5113]: I0121 09:31:51.714828 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d","Type":"ContainerStarted","Data":"148394de997b21c16ff2d46b10ecf4bde37f28f15008ff1ce68bcb6800d6180c"}
Jan 21 09:31:51 crc kubenswrapper[5113]: I0121 09:31:51.746366 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-2-build" podStartSLOduration=14.746346727 podStartE2EDuration="14.746346727s" podCreationTimestamp="2026-01-21 09:31:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:31:51.744654098 +0000 UTC m=+841.245481207" watchObservedRunningTime="2026-01-21 09:31:51.746346727 +0000 UTC m=+841.247173776"
Jan 21 09:32:00 crc kubenswrapper[5113]: I0121 09:32:00.150952 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483132-2hccf"]
Jan 21 09:32:00 crc kubenswrapper[5113]: I0121 09:32:00.152779 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="48f10d3c-0d58-4df6-811a-b82a47c770a4" containerName="manage-dockerfile"
Jan 21 09:32:00 crc kubenswrapper[5113]: I0121 09:32:00.152802 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="48f10d3c-0d58-4df6-811a-b82a47c770a4" containerName="manage-dockerfile"
Jan 21 09:32:00 crc kubenswrapper[5113]: I0121 09:32:00.153014 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="48f10d3c-0d58-4df6-811a-b82a47c770a4" containerName="manage-dockerfile"
Jan 21 09:32:01 crc kubenswrapper[5113]: I0121 09:32:01.985213 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483132-2hccf"
Jan 21 09:32:01 crc kubenswrapper[5113]: I0121 09:32:01.990226 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-mmrr5\""
Jan 21 09:32:01 crc kubenswrapper[5113]: I0121 09:32:01.994527 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 21 09:32:01 crc kubenswrapper[5113]: I0121 09:32:01.997334 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 21 09:32:02 crc kubenswrapper[5113]: I0121 09:32:02.001854 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483132-2hccf"]
Jan 21 09:32:02 crc kubenswrapper[5113]: I0121 09:32:02.048526 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w86b4\" (UniqueName: \"kubernetes.io/projected/b16d228c-645d-47fe-8089-aa8dff35fbd6-kube-api-access-w86b4\") pod \"auto-csr-approver-29483132-2hccf\" (UID: \"b16d228c-645d-47fe-8089-aa8dff35fbd6\") " pod="openshift-infra/auto-csr-approver-29483132-2hccf"
Jan 21 09:32:02 crc kubenswrapper[5113]: I0121 09:32:02.149721 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w86b4\" (UniqueName: \"kubernetes.io/projected/b16d228c-645d-47fe-8089-aa8dff35fbd6-kube-api-access-w86b4\") pod \"auto-csr-approver-29483132-2hccf\" (UID: \"b16d228c-645d-47fe-8089-aa8dff35fbd6\") " pod="openshift-infra/auto-csr-approver-29483132-2hccf"
Jan 21 09:32:02 crc kubenswrapper[5113]: I0121 09:32:02.171467 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w86b4\" (UniqueName: \"kubernetes.io/projected/b16d228c-645d-47fe-8089-aa8dff35fbd6-kube-api-access-w86b4\") pod \"auto-csr-approver-29483132-2hccf\" (UID: \"b16d228c-645d-47fe-8089-aa8dff35fbd6\") " pod="openshift-infra/auto-csr-approver-29483132-2hccf"
Jan 21 09:32:02 crc kubenswrapper[5113]: I0121 09:32:02.307246 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483132-2hccf"
Jan 21 09:32:02 crc kubenswrapper[5113]: I0121 09:32:02.819657 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483132-2hccf"]
Jan 21 09:32:03 crc kubenswrapper[5113]: I0121 09:32:03.873103 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483132-2hccf" event={"ID":"b16d228c-645d-47fe-8089-aa8dff35fbd6","Type":"ContainerStarted","Data":"870e1ee76112835f7040c1a86b9432bde9173ef0a024dd0de90c98d190506e8f"}
Jan 21 09:32:05 crc kubenswrapper[5113]: I0121 09:32:05.887781 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483132-2hccf" event={"ID":"b16d228c-645d-47fe-8089-aa8dff35fbd6","Type":"ContainerStarted","Data":"4d5dd1b4025e32c11975bc76f00618f7425f57d1439d7acf22b29687550c12b6"}
Jan 21 09:32:05 crc kubenswrapper[5113]: I0121 09:32:05.904942 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29483132-2hccf" podStartSLOduration=3.209123967 podStartE2EDuration="5.904925401s" podCreationTimestamp="2026-01-21 09:32:00 +0000 UTC" firstStartedPulling="2026-01-21 09:32:02.823980447 +0000 UTC m=+852.324807496" lastFinishedPulling="2026-01-21 09:32:05.519781881 +0000 UTC m=+855.020608930" observedRunningTime="2026-01-21 09:32:05.901238945 +0000 UTC m=+855.402065994" watchObservedRunningTime="2026-01-21 09:32:05.904925401 +0000 UTC m=+855.405752450"
Jan 21 09:32:06 crc kubenswrapper[5113]: I0121 09:32:06.896293 5113 generic.go:358] "Generic (PLEG): container finished" podID="b16d228c-645d-47fe-8089-aa8dff35fbd6" containerID="4d5dd1b4025e32c11975bc76f00618f7425f57d1439d7acf22b29687550c12b6" exitCode=0
Jan 21 09:32:06 crc kubenswrapper[5113]: I0121 09:32:06.896413 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483132-2hccf" event={"ID":"b16d228c-645d-47fe-8089-aa8dff35fbd6","Type":"ContainerDied","Data":"4d5dd1b4025e32c11975bc76f00618f7425f57d1439d7acf22b29687550c12b6"}
Jan 21 09:32:08 crc kubenswrapper[5113]: I0121 09:32:08.194306 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483132-2hccf"
Jan 21 09:32:08 crc kubenswrapper[5113]: I0121 09:32:08.244523 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w86b4\" (UniqueName: \"kubernetes.io/projected/b16d228c-645d-47fe-8089-aa8dff35fbd6-kube-api-access-w86b4\") pod \"b16d228c-645d-47fe-8089-aa8dff35fbd6\" (UID: \"b16d228c-645d-47fe-8089-aa8dff35fbd6\") "
Jan 21 09:32:08 crc kubenswrapper[5113]: I0121 09:32:08.249758 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b16d228c-645d-47fe-8089-aa8dff35fbd6-kube-api-access-w86b4" (OuterVolumeSpecName: "kube-api-access-w86b4") pod "b16d228c-645d-47fe-8089-aa8dff35fbd6" (UID: "b16d228c-645d-47fe-8089-aa8dff35fbd6"). InnerVolumeSpecName "kube-api-access-w86b4". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:32:08 crc kubenswrapper[5113]: I0121 09:32:08.346164 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w86b4\" (UniqueName: \"kubernetes.io/projected/b16d228c-645d-47fe-8089-aa8dff35fbd6-kube-api-access-w86b4\") on node \"crc\" DevicePath \"\""
Jan 21 09:32:08 crc kubenswrapper[5113]: I0121 09:32:08.921430 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483132-2hccf" event={"ID":"b16d228c-645d-47fe-8089-aa8dff35fbd6","Type":"ContainerDied","Data":"870e1ee76112835f7040c1a86b9432bde9173ef0a024dd0de90c98d190506e8f"}
Jan 21 09:32:08 crc kubenswrapper[5113]: I0121 09:32:08.921651 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="870e1ee76112835f7040c1a86b9432bde9173ef0a024dd0de90c98d190506e8f"
Jan 21 09:32:08 crc kubenswrapper[5113]: I0121 09:32:08.921438 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483132-2hccf"
Jan 21 09:32:08 crc kubenswrapper[5113]: I0121 09:32:08.954160 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483126-2lhx7"]
Jan 21 09:32:08 crc kubenswrapper[5113]: I0121 09:32:08.959227 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483126-2lhx7"]
Jan 21 09:32:10 crc kubenswrapper[5113]: I0121 09:32:10.850663 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5795524d-b047-4cd0-a10c-8b945809822a" path="/var/lib/kubelet/pods/5795524d-b047-4cd0-a10c-8b945809822a/volumes"
Jan 21 09:32:51 crc kubenswrapper[5113]: I0121 09:32:51.254459 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vcw7s_11da35cd-b282-4537-ac8f-b6c86b18c21f/kube-multus/0.log"
Jan 21 09:32:51 crc kubenswrapper[5113]: I0121 09:32:51.264046 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vcw7s_11da35cd-b282-4537-ac8f-b6c86b18c21f/kube-multus/0.log"
Jan 21 09:32:51 crc kubenswrapper[5113]: I0121 09:32:51.269662 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 21 09:32:51 crc kubenswrapper[5113]: I0121 09:32:51.272529 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 21 09:32:51 crc kubenswrapper[5113]: I0121 09:32:51.639796 5113 scope.go:117] "RemoveContainer" containerID="1e3ebfc9b348005d6f99ce8a6cd1328da82abcb05888f749bac5bf99b2aea168"
Jan 21 09:33:28 crc kubenswrapper[5113]: I0121 09:33:28.339537 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 09:33:28 crc kubenswrapper[5113]: I0121 09:33:28.340108 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 09:33:35 crc kubenswrapper[5113]: I0121 09:33:35.933098 5113 generic.go:358] "Generic (PLEG): container finished" podID="f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d" containerID="148394de997b21c16ff2d46b10ecf4bde37f28f15008ff1ce68bcb6800d6180c" exitCode=0
Jan 21 09:33:35 crc kubenswrapper[5113]: I0121 09:33:35.934251 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d","Type":"ContainerDied","Data":"148394de997b21c16ff2d46b10ecf4bde37f28f15008ff1ce68bcb6800d6180c"}
Jan 21 09:33:37 crc kubenswrapper[5113]: I0121 09:33:37.289007 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 09:33:37 crc kubenswrapper[5113]: I0121 09:33:37.346124 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-build-proxy-ca-bundles\") pod \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") "
Jan 21 09:33:37 crc kubenswrapper[5113]: I0121 09:33:37.346204 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-builder-dockercfg-xwwzx-push\") pod \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") "
Jan 21 09:33:37 crc kubenswrapper[5113]: I0121 09:33:37.346264 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-buildcachedir\") pod \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") "
Jan 21 09:33:37 crc kubenswrapper[5113]: I0121 09:33:37.346335 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-container-storage-run\") pod \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") "
Jan 21 09:33:37 crc kubenswrapper[5113]: I0121 09:33:37.346373 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-node-pullsecrets\") pod \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") "
Jan 21 09:33:37 crc kubenswrapper[5113]: I0121 09:33:37.346420 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8mpl\" (UniqueName: \"kubernetes.io/projected/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-kube-api-access-w8mpl\") pod \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") "
Jan 21 09:33:37 crc kubenswrapper[5113]: I0121 09:33:37.346495 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-buildworkdir\") pod \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") "
Jan 21 09:33:37 crc kubenswrapper[5113]: I0121 09:33:37.346570 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-build-system-configs\") pod \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") "
Jan 21 09:33:37 crc kubenswrapper[5113]: I0121 09:33:37.346619 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-build-blob-cache\") pod \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") "
Jan 21 09:33:37 crc kubenswrapper[5113]: I0121 09:33:37.346651 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-builder-dockercfg-xwwzx-pull\") pod \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") "
Jan 21 09:33:37 crc kubenswrapper[5113]: I0121 09:33:37.346784 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-container-storage-root\") pod \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") "
Jan 21 09:33:37 crc kubenswrapper[5113]: I0121 09:33:37.346830 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-build-ca-bundles\") pod \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\" (UID: \"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d\") "
Jan 21 09:33:37 crc kubenswrapper[5113]: I0121 09:33:37.347473 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d" (UID: "f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:33:37 crc kubenswrapper[5113]: I0121 09:33:37.347470 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d" (UID: "f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 09:33:37 crc kubenswrapper[5113]: I0121 09:33:37.347558 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d" (UID: "f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 09:33:37 crc kubenswrapper[5113]: I0121 09:33:37.349288 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d" (UID: "f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:33:37 crc kubenswrapper[5113]: I0121 09:33:37.349573 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d" (UID: "f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:33:37 crc kubenswrapper[5113]: I0121 09:33:37.349914 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d" (UID: "f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:33:37 crc kubenswrapper[5113]: I0121 09:33:37.358085 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-builder-dockercfg-xwwzx-push" (OuterVolumeSpecName: "builder-dockercfg-xwwzx-push") pod "f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d" (UID: "f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d"). InnerVolumeSpecName "builder-dockercfg-xwwzx-push". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:33:37 crc kubenswrapper[5113]: I0121 09:33:37.358375 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-builder-dockercfg-xwwzx-pull" (OuterVolumeSpecName: "builder-dockercfg-xwwzx-pull") pod "f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d" (UID: "f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d"). InnerVolumeSpecName "builder-dockercfg-xwwzx-pull". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:33:37 crc kubenswrapper[5113]: I0121 09:33:37.358642 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-kube-api-access-w8mpl" (OuterVolumeSpecName: "kube-api-access-w8mpl") pod "f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d" (UID: "f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d"). InnerVolumeSpecName "kube-api-access-w8mpl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:33:37 crc kubenswrapper[5113]: I0121 09:33:37.418946 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d" (UID: "f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:33:37 crc kubenswrapper[5113]: I0121 09:33:37.448610 5113 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-build-system-configs\") on node \"crc\" DevicePath \"\""
Jan 21 09:33:37 crc kubenswrapper[5113]: I0121 09:33:37.448650 5113 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-builder-dockercfg-xwwzx-pull\") on node \"crc\" DevicePath \"\""
Jan 21 09:33:37 crc kubenswrapper[5113]: I0121 09:33:37.448663 5113 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 21 09:33:37 crc kubenswrapper[5113]: I0121 09:33:37.448673 5113 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 21 09:33:37 crc kubenswrapper[5113]: I0121 09:33:37.448683 5113 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-builder-dockercfg-xwwzx-push\") on node \"crc\" DevicePath \"\""
Jan 21 09:33:37 crc kubenswrapper[5113]: I0121 09:33:37.448696 5113 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-buildcachedir\") on node \"crc\" DevicePath \"\""
Jan 21 09:33:37 crc kubenswrapper[5113]: I0121 09:33:37.448706 5113 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-container-storage-run\") on node \"crc\" DevicePath \"\""
Jan 21 09:33:37 crc kubenswrapper[5113]: I0121 09:33:37.448716 5113 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Jan 21 09:33:37 crc kubenswrapper[5113]: I0121 09:33:37.448727 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w8mpl\" (UniqueName: \"kubernetes.io/projected/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-kube-api-access-w8mpl\") on node \"crc\" DevicePath \"\""
Jan 21 09:33:37 crc kubenswrapper[5113]: I0121 09:33:37.448768 5113 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-buildworkdir\") on node \"crc\" DevicePath \"\""
Jan 21 09:33:37 crc kubenswrapper[5113]: I0121 09:33:37.569314 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d" (UID: "f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:33:37 crc kubenswrapper[5113]: I0121 09:33:37.651937 5113 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-build-blob-cache\") on node \"crc\" DevicePath \"\""
Jan 21 09:33:37 crc kubenswrapper[5113]: I0121 09:33:37.953276 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build"
Jan 21 09:33:37 crc kubenswrapper[5113]: I0121 09:33:37.953316 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d","Type":"ContainerDied","Data":"63d35c34037cd6ee16e6dbf6c8a496cbc01e73926797134619cde9729bbe7006"}
Jan 21 09:33:37 crc kubenswrapper[5113]: I0121 09:33:37.954038 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63d35c34037cd6ee16e6dbf6c8a496cbc01e73926797134619cde9729bbe7006"
Jan 21 09:33:39 crc kubenswrapper[5113]: I0121 09:33:39.823666 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d" (UID: "f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:33:39 crc kubenswrapper[5113]: I0121 09:33:39.903127 5113 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d-container-storage-root\") on node \"crc\" DevicePath \"\""
Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.244328 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"]
Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.246868 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d" containerName="manage-dockerfile"
Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.246902 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d" containerName="manage-dockerfile"
Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.246919 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d" containerName="docker-build"
Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.246927 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d" containerName="docker-build"
Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.246945 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d" containerName="git-clone"
Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.246954 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d" containerName="git-clone"
Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.246964 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b16d228c-645d-47fe-8089-aa8dff35fbd6" containerName="oc"
Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.246971 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="b16d228c-645d-47fe-8089-aa8dff35fbd6" containerName="oc"
Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.247088 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d" containerName="docker-build"
Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.247104 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="b16d228c-645d-47fe-8089-aa8dff35fbd6" containerName="oc"
Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.447970 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"]
Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.448129 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.450538 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-1-ca\""
Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.450558 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-xwwzx\""
Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.451415 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-1-sys-config\""
Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.451834 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-1-global-ca\""
Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.536538 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.536587 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.536631 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.536661 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.536695 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.536750 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqvnk\" (UniqueName: \"kubernetes.io/projected/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-kube-api-access-jqvnk\") pod \"smart-gateway-operator-1-build\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.536778 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-builder-dockercfg-xwwzx-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.536807 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-builder-dockercfg-xwwzx-push\") pod \"smart-gateway-operator-1-build\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.536908 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.537009 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.537054 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.537096 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.638499 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.638581 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.638621 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jqvnk\" (UniqueName: \"kubernetes.io/projected/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-kube-api-access-jqvnk\") pod \"smart-gateway-operator-1-build\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.638791 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.638864 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-builder-dockercfg-xwwzx-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.638970 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-builder-dockercfg-xwwzx-push\") pod \"smart-gateway-operator-1-build\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.639068 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.639209 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName:
\"kubernetes.io/configmap/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.639269 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.639327 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.639546 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.639626 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.639791 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.639796 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.639992 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.640278 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.640457 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.640941 5113 operation_generator.go:615] "MountVolume.SetUp succeeded 
for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.641365 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.641775 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.642001 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.649354 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-builder-dockercfg-xwwzx-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.649450 5113 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-builder-dockercfg-xwwzx-push\") pod \"smart-gateway-operator-1-build\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.669561 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqvnk\" (UniqueName: \"kubernetes.io/projected/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-kube-api-access-jqvnk\") pod \"smart-gateway-operator-1-build\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 09:33:42 crc kubenswrapper[5113]: I0121 09:33:42.773974 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 21 09:33:43 crc kubenswrapper[5113]: I0121 09:33:43.015626 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 21 09:33:43 crc kubenswrapper[5113]: I0121 09:33:43.023416 5113 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 09:33:44 crc kubenswrapper[5113]: I0121 09:33:44.008897 5113 generic.go:358] "Generic (PLEG): container finished" podID="e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0" containerID="348d1f592b486e17dd55c7a12cc57f2d56925569c625c850e6d419003e6381d8" exitCode=0 Jan 21 09:33:44 crc kubenswrapper[5113]: I0121 09:33:44.008992 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0","Type":"ContainerDied","Data":"348d1f592b486e17dd55c7a12cc57f2d56925569c625c850e6d419003e6381d8"} Jan 21 09:33:44 crc kubenswrapper[5113]: I0121 09:33:44.009527 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0","Type":"ContainerStarted","Data":"2af6ba05b07bda090d59735fdf736d3f3426d0f65439e82af54b314c090b153c"} Jan 21 09:33:45 crc kubenswrapper[5113]: I0121 09:33:45.027126 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0","Type":"ContainerStarted","Data":"aef242e750f1c00d3888dacc22501c956ddf32c47ef394f859888f58a05c4c19"} Jan 21 09:33:52 crc kubenswrapper[5113]: I0121 09:33:52.951610 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-1-build" podStartSLOduration=10.951590233 podStartE2EDuration="10.951590233s" podCreationTimestamp="2026-01-21 09:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:33:45.062968114 +0000 UTC m=+954.563795173" watchObservedRunningTime="2026-01-21 09:33:52.951590233 +0000 UTC m=+962.452417282" Jan 21 09:33:52 crc kubenswrapper[5113]: I0121 09:33:52.961272 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 21 09:33:52 crc kubenswrapper[5113]: I0121 09:33:52.961708 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/smart-gateway-operator-1-build" podUID="e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0" containerName="docker-build" containerID="cri-o://aef242e750f1c00d3888dacc22501c956ddf32c47ef394f859888f58a05c4c19" gracePeriod=30 Jan 21 09:33:54 crc kubenswrapper[5113]: I0121 09:33:54.594455 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.525921 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.528383 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-2-sys-config\"" Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.528780 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-2-ca\"" Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.529759 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-2-global-ca\"" Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.541956 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.683009 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.683182 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76qpl\" (UniqueName: \"kubernetes.io/projected/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-kube-api-access-76qpl\") pod \"smart-gateway-operator-2-build\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.683261 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.683356 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.683470 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.683515 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-builder-dockercfg-xwwzx-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.683592 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 09:33:57 crc 
kubenswrapper[5113]: I0121 09:33:57.683628 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-builder-dockercfg-xwwzx-push\") pod \"smart-gateway-operator-2-build\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.683697 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.683936 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.684392 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.684440 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-build-blob-cache\") pod 
\"smart-gateway-operator-2-build\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.786216 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.786264 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.786286 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.786441 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.786519 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-76qpl\" (UniqueName: 
\"kubernetes.io/projected/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-kube-api-access-76qpl\") pod \"smart-gateway-operator-2-build\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.786549 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.786617 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.786648 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.786667 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.786701 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-builder-dockercfg-xwwzx-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.786734 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.786829 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-builder-dockercfg-xwwzx-push\") pod \"smart-gateway-operator-2-build\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.786925 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.787270 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 
09:33:57.787289 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.787346 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.787860 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.788047 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.788253 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 
09:33:57.788366 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.788603 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.793611 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-builder-dockercfg-xwwzx-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.794718 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-builder-dockercfg-xwwzx-push\") pod \"smart-gateway-operator-2-build\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.804456 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-76qpl\" (UniqueName: \"kubernetes.io/projected/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-kube-api-access-76qpl\") pod \"smart-gateway-operator-2-build\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") " 
pod="service-telemetry/smart-gateway-operator-2-build"
Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.846066 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build"
Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.848615 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0/docker-build/0.log"
Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.849198 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.990403 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-buildworkdir\") pod \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") "
Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.991060 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqvnk\" (UniqueName: \"kubernetes.io/projected/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-kube-api-access-jqvnk\") pod \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") "
Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.991125 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-build-system-configs\") pod \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") "
Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.991174 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-node-pullsecrets\") pod \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") "
Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.991231 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-build-blob-cache\") pod \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") "
Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.991230 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0" (UID: "e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.991261 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-builder-dockercfg-xwwzx-push\") pod \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") "
Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.991293 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0" (UID: "e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.991380 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-container-storage-run\") pod \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") "
Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.991421 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-container-storage-root\") pod \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") "
Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.991469 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-buildcachedir\") pod \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") "
Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.991528 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-build-proxy-ca-bundles\") pod \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") "
Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.991704 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-build-ca-bundles\") pod \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") "
Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.991778 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-builder-dockercfg-xwwzx-pull\") pod \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\" (UID: \"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0\") "
Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.991840 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0" (UID: "e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.991893 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0" (UID: "e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.992242 5113 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-buildworkdir\") on node \"crc\" DevicePath \"\""
Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.992258 5113 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-build-system-configs\") on node \"crc\" DevicePath \"\""
Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.992271 5113 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.992282 5113 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-buildcachedir\") on node \"crc\" DevicePath \"\""
Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.992406 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0" (UID: "e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.993310 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0" (UID: "e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.993545 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0" (UID: "e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.993843 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0" (UID: "e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.998108 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-builder-dockercfg-xwwzx-push" (OuterVolumeSpecName: "builder-dockercfg-xwwzx-push") pod "e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0" (UID: "e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0"). InnerVolumeSpecName "builder-dockercfg-xwwzx-push". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.998147 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-builder-dockercfg-xwwzx-pull" (OuterVolumeSpecName: "builder-dockercfg-xwwzx-pull") pod "e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0" (UID: "e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0"). InnerVolumeSpecName "builder-dockercfg-xwwzx-pull". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:33:57 crc kubenswrapper[5113]: I0121 09:33:57.998156 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-kube-api-access-jqvnk" (OuterVolumeSpecName: "kube-api-access-jqvnk") pod "e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0" (UID: "e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0"). InnerVolumeSpecName "kube-api-access-jqvnk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:33:58 crc kubenswrapper[5113]: I0121 09:33:58.040117 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"]
Jan 21 09:33:58 crc kubenswrapper[5113]: W0121 09:33:58.041803 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2c9d3e6_d659_4bc9_95a9_e5325d8fd568.slice/crio-51d668c1e42ee6ffb87615ed332ac5083392d71f9997b3299e86937bbed62aef WatchSource:0}: Error finding container 51d668c1e42ee6ffb87615ed332ac5083392d71f9997b3299e86937bbed62aef: Status 404 returned error can't find the container with id 51d668c1e42ee6ffb87615ed332ac5083392d71f9997b3299e86937bbed62aef
Jan 21 09:33:58 crc kubenswrapper[5113]: I0121 09:33:58.094748 5113 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-container-storage-run\") on node \"crc\" DevicePath \"\""
Jan 21 09:33:58 crc kubenswrapper[5113]: I0121 09:33:58.094780 5113 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-container-storage-root\") on node \"crc\" DevicePath \"\""
Jan 21 09:33:58 crc kubenswrapper[5113]: I0121 09:33:58.094789 5113 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 21 09:33:58 crc kubenswrapper[5113]: I0121 09:33:58.094798 5113 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 21 09:33:58 crc kubenswrapper[5113]: I0121 09:33:58.094807 5113 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-builder-dockercfg-xwwzx-pull\") on node \"crc\" DevicePath \"\""
Jan 21 09:33:58 crc kubenswrapper[5113]: I0121 09:33:58.094815 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jqvnk\" (UniqueName: \"kubernetes.io/projected/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-kube-api-access-jqvnk\") on node \"crc\" DevicePath \"\""
Jan 21 09:33:58 crc kubenswrapper[5113]: I0121 09:33:58.094823 5113 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-builder-dockercfg-xwwzx-push\") on node \"crc\" DevicePath \"\""
Jan 21 09:33:58 crc kubenswrapper[5113]: I0121 09:33:58.141767 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0" (UID: "e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:33:58 crc kubenswrapper[5113]: I0121 09:33:58.196296 5113 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0-build-blob-cache\") on node \"crc\" DevicePath \"\""
Jan 21 09:33:58 crc kubenswrapper[5113]: I0121 09:33:58.237884 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0/docker-build/0.log"
Jan 21 09:33:58 crc kubenswrapper[5113]: I0121 09:33:58.238375 5113 generic.go:358] "Generic (PLEG): container finished" podID="e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0" containerID="aef242e750f1c00d3888dacc22501c956ddf32c47ef394f859888f58a05c4c19" exitCode=1
Jan 21 09:33:58 crc kubenswrapper[5113]: I0121 09:33:58.238481 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0","Type":"ContainerDied","Data":"aef242e750f1c00d3888dacc22501c956ddf32c47ef394f859888f58a05c4c19"}
Jan 21 09:33:58 crc kubenswrapper[5113]: I0121 09:33:58.238523 5113 scope.go:117] "RemoveContainer" containerID="aef242e750f1c00d3888dacc22501c956ddf32c47ef394f859888f58a05c4c19"
Jan 21 09:33:58 crc kubenswrapper[5113]: I0121 09:33:58.294756 5113 scope.go:117] "RemoveContainer" containerID="348d1f592b486e17dd55c7a12cc57f2d56925569c625c850e6d419003e6381d8"
Jan 21 09:33:58 crc kubenswrapper[5113]: I0121 09:33:58.339899 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 09:33:58 crc kubenswrapper[5113]: I0121 09:33:58.339967 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 09:33:59 crc kubenswrapper[5113]: I0121 09:33:59.246670 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0","Type":"ContainerDied","Data":"2af6ba05b07bda090d59735fdf736d3f3426d0f65439e82af54b314c090b153c"}
Jan 21 09:33:59 crc kubenswrapper[5113]: I0121 09:33:59.246682 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build"
Jan 21 09:33:59 crc kubenswrapper[5113]: I0121 09:33:59.248105 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568","Type":"ContainerStarted","Data":"93e9b25ec262258fd7129beaffcff087a0959815edbdb5f0c7cb8e1e8824f552"}
Jan 21 09:33:59 crc kubenswrapper[5113]: I0121 09:33:59.248127 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568","Type":"ContainerStarted","Data":"51d668c1e42ee6ffb87615ed332ac5083392d71f9997b3299e86937bbed62aef"}
Jan 21 09:33:59 crc kubenswrapper[5113]: I0121 09:33:59.266419 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"]
Jan 21 09:33:59 crc kubenswrapper[5113]: I0121 09:33:59.270971 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"]
Jan 21 09:33:59 crc kubenswrapper[5113]: E0121 09:33:59.387198 5113 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.181:53968->38.102.83.181:34439: write tcp 38.102.83.181:53968->38.102.83.181:34439: write: broken pipe
Jan 21 09:34:00 crc kubenswrapper[5113]: I0121 09:34:00.142358 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483134-qxqjm"]
Jan 21 09:34:00 crc kubenswrapper[5113]: I0121 09:34:00.143768 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0" containerName="manage-dockerfile"
Jan 21 09:34:00 crc kubenswrapper[5113]: I0121 09:34:00.143787 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0" containerName="manage-dockerfile"
Jan 21 09:34:00 crc kubenswrapper[5113]: I0121 09:34:00.143822 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0" containerName="docker-build"
Jan 21 09:34:00 crc kubenswrapper[5113]: I0121 09:34:00.143830 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0" containerName="docker-build"
Jan 21 09:34:00 crc kubenswrapper[5113]: I0121 09:34:00.143976 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0" containerName="docker-build"
Jan 21 09:34:00 crc kubenswrapper[5113]: I0121 09:34:00.147813 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483134-qxqjm"
Jan 21 09:34:00 crc kubenswrapper[5113]: I0121 09:34:00.149608 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-mmrr5\""
Jan 21 09:34:00 crc kubenswrapper[5113]: I0121 09:34:00.150370 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483134-qxqjm"]
Jan 21 09:34:00 crc kubenswrapper[5113]: I0121 09:34:00.150445 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 21 09:34:00 crc kubenswrapper[5113]: I0121 09:34:00.151534 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 21 09:34:00 crc kubenswrapper[5113]: I0121 09:34:00.259711 5113 generic.go:358] "Generic (PLEG): container finished" podID="b2c9d3e6-d659-4bc9-95a9-e5325d8fd568" containerID="93e9b25ec262258fd7129beaffcff087a0959815edbdb5f0c7cb8e1e8824f552" exitCode=0
Jan 21 09:34:00 crc kubenswrapper[5113]: I0121 09:34:00.259859 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568","Type":"ContainerDied","Data":"93e9b25ec262258fd7129beaffcff087a0959815edbdb5f0c7cb8e1e8824f552"}
Jan 21 09:34:00 crc kubenswrapper[5113]: I0121 09:34:00.328331 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlthn\" (UniqueName: \"kubernetes.io/projected/337657a4-5e17-4e6a-9174-6b7e1ecdb9ee-kube-api-access-vlthn\") pod \"auto-csr-approver-29483134-qxqjm\" (UID: \"337657a4-5e17-4e6a-9174-6b7e1ecdb9ee\") " pod="openshift-infra/auto-csr-approver-29483134-qxqjm"
Jan 21 09:34:00 crc kubenswrapper[5113]: I0121 09:34:00.431782 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vlthn\" (UniqueName: \"kubernetes.io/projected/337657a4-5e17-4e6a-9174-6b7e1ecdb9ee-kube-api-access-vlthn\") pod \"auto-csr-approver-29483134-qxqjm\" (UID: \"337657a4-5e17-4e6a-9174-6b7e1ecdb9ee\") " pod="openshift-infra/auto-csr-approver-29483134-qxqjm"
Jan 21 09:34:00 crc kubenswrapper[5113]: I0121 09:34:00.461598 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlthn\" (UniqueName: \"kubernetes.io/projected/337657a4-5e17-4e6a-9174-6b7e1ecdb9ee-kube-api-access-vlthn\") pod \"auto-csr-approver-29483134-qxqjm\" (UID: \"337657a4-5e17-4e6a-9174-6b7e1ecdb9ee\") " pod="openshift-infra/auto-csr-approver-29483134-qxqjm"
Jan 21 09:34:00 crc kubenswrapper[5113]: I0121 09:34:00.497985 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483134-qxqjm"
Jan 21 09:34:00 crc kubenswrapper[5113]: I0121 09:34:00.765194 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483134-qxqjm"]
Jan 21 09:34:00 crc kubenswrapper[5113]: W0121 09:34:00.782729 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod337657a4_5e17_4e6a_9174_6b7e1ecdb9ee.slice/crio-dd7e0a5c7c8d1d642c250dbcd465571577afa85a5e1b9fb82d88e3e1651c9bb1 WatchSource:0}: Error finding container dd7e0a5c7c8d1d642c250dbcd465571577afa85a5e1b9fb82d88e3e1651c9bb1: Status 404 returned error can't find the container with id dd7e0a5c7c8d1d642c250dbcd465571577afa85a5e1b9fb82d88e3e1651c9bb1
Jan 21 09:34:00 crc kubenswrapper[5113]: I0121 09:34:00.855355 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0" path="/var/lib/kubelet/pods/e5bcb6f5-2b5b-482c-aadf-2e9acdc00ff0/volumes"
Jan 21 09:34:01 crc kubenswrapper[5113]: I0121 09:34:01.271269 5113 generic.go:358] "Generic (PLEG): container finished" podID="b2c9d3e6-d659-4bc9-95a9-e5325d8fd568" containerID="66e46d6368f7f711c5d3ffe4fa20723c551ef1df067af8522a5db8cc5bcd9661" exitCode=0
Jan 21 09:34:01 crc kubenswrapper[5113]: I0121 09:34:01.271335 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568","Type":"ContainerDied","Data":"66e46d6368f7f711c5d3ffe4fa20723c551ef1df067af8522a5db8cc5bcd9661"}
Jan 21 09:34:01 crc kubenswrapper[5113]: I0121 09:34:01.276311 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483134-qxqjm" event={"ID":"337657a4-5e17-4e6a-9174-6b7e1ecdb9ee","Type":"ContainerStarted","Data":"dd7e0a5c7c8d1d642c250dbcd465571577afa85a5e1b9fb82d88e3e1651c9bb1"}
Jan 21 09:34:01 crc kubenswrapper[5113]: I0121 09:34:01.305625 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_b2c9d3e6-d659-4bc9-95a9-e5325d8fd568/manage-dockerfile/0.log"
Jan 21 09:34:02 crc kubenswrapper[5113]: I0121 09:34:02.303943 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568","Type":"ContainerStarted","Data":"d501cc293d7238f0db42db26f07474a665aae250ada27d982a9fb91e05d0e383"}
Jan 21 09:34:02 crc kubenswrapper[5113]: I0121 09:34:02.307590 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483134-qxqjm" event={"ID":"337657a4-5e17-4e6a-9174-6b7e1ecdb9ee","Type":"ContainerStarted","Data":"70b2617a4ff231b4a20fa3850daa4bdadbee6c5630f828eae44ac52e1ac8e587"}
Jan 21 09:34:02 crc kubenswrapper[5113]: I0121 09:34:02.337415 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-2-build" podStartSLOduration=8.337397718 podStartE2EDuration="8.337397718s" podCreationTimestamp="2026-01-21 09:33:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:34:02.334188465 +0000 UTC m=+971.835015534" watchObservedRunningTime="2026-01-21 09:34:02.337397718 +0000 UTC m=+971.838224767"
Jan 21 09:34:02 crc kubenswrapper[5113]: I0121 09:34:02.357096 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29483134-qxqjm" podStartSLOduration=1.290686984 podStartE2EDuration="2.357071207s" podCreationTimestamp="2026-01-21 09:34:00 +0000 UTC" firstStartedPulling="2026-01-21 09:34:00.784697156 +0000 UTC m=+970.285524215" lastFinishedPulling="2026-01-21 09:34:01.851081349 +0000 UTC m=+971.351908438" observedRunningTime="2026-01-21 09:34:02.356921023 +0000 UTC m=+971.857748072" watchObservedRunningTime="2026-01-21 09:34:02.357071207 +0000 UTC m=+971.857898276"
Jan 21 09:34:03 crc kubenswrapper[5113]: I0121 09:34:03.314536 5113 generic.go:358] "Generic (PLEG): container finished" podID="337657a4-5e17-4e6a-9174-6b7e1ecdb9ee" containerID="70b2617a4ff231b4a20fa3850daa4bdadbee6c5630f828eae44ac52e1ac8e587" exitCode=0
Jan 21 09:34:03 crc kubenswrapper[5113]: I0121 09:34:03.314663 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483134-qxqjm" event={"ID":"337657a4-5e17-4e6a-9174-6b7e1ecdb9ee","Type":"ContainerDied","Data":"70b2617a4ff231b4a20fa3850daa4bdadbee6c5630f828eae44ac52e1ac8e587"}
Jan 21 09:34:04 crc kubenswrapper[5113]: I0121 09:34:04.590810 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483134-qxqjm"
Jan 21 09:34:04 crc kubenswrapper[5113]: I0121 09:34:04.696017 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlthn\" (UniqueName: \"kubernetes.io/projected/337657a4-5e17-4e6a-9174-6b7e1ecdb9ee-kube-api-access-vlthn\") pod \"337657a4-5e17-4e6a-9174-6b7e1ecdb9ee\" (UID: \"337657a4-5e17-4e6a-9174-6b7e1ecdb9ee\") "
Jan 21 09:34:04 crc kubenswrapper[5113]: I0121 09:34:04.708217 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/337657a4-5e17-4e6a-9174-6b7e1ecdb9ee-kube-api-access-vlthn" (OuterVolumeSpecName: "kube-api-access-vlthn") pod "337657a4-5e17-4e6a-9174-6b7e1ecdb9ee" (UID: "337657a4-5e17-4e6a-9174-6b7e1ecdb9ee"). InnerVolumeSpecName "kube-api-access-vlthn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:34:04 crc kubenswrapper[5113]: I0121 09:34:04.797453 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vlthn\" (UniqueName: \"kubernetes.io/projected/337657a4-5e17-4e6a-9174-6b7e1ecdb9ee-kube-api-access-vlthn\") on node \"crc\" DevicePath \"\""
Jan 21 09:34:05 crc kubenswrapper[5113]: I0121 09:34:05.330090 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483134-qxqjm" event={"ID":"337657a4-5e17-4e6a-9174-6b7e1ecdb9ee","Type":"ContainerDied","Data":"dd7e0a5c7c8d1d642c250dbcd465571577afa85a5e1b9fb82d88e3e1651c9bb1"}
Jan 21 09:34:05 crc kubenswrapper[5113]: I0121 09:34:05.330150 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd7e0a5c7c8d1d642c250dbcd465571577afa85a5e1b9fb82d88e3e1651c9bb1"
Jan 21 09:34:05 crc kubenswrapper[5113]: I0121 09:34:05.330156 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483134-qxqjm"
Jan 21 09:34:05 crc kubenswrapper[5113]: I0121 09:34:05.652642 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483128-8rjfv"]
Jan 21 09:34:05 crc kubenswrapper[5113]: I0121 09:34:05.662792 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483128-8rjfv"]
Jan 21 09:34:06 crc kubenswrapper[5113]: I0121 09:34:06.864454 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8510340e-32f0-4f11-82c2-d57eed3356be" path="/var/lib/kubelet/pods/8510340e-32f0-4f11-82c2-d57eed3356be/volumes"
Jan 21 09:34:28 crc kubenswrapper[5113]: I0121 09:34:28.339991 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 09:34:28 crc kubenswrapper[5113]: I0121 09:34:28.341190 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 09:34:28 crc kubenswrapper[5113]: I0121 09:34:28.341284 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt"
Jan 21 09:34:28 crc kubenswrapper[5113]: I0121 09:34:28.342528 5113 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"33e175e75c8a0f70e28412b6b026a9f0b5987cfe2dfc69fc2d2d0b83fb73ab1c"} pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 09:34:28 crc kubenswrapper[5113]: I0121 09:34:28.342646 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" containerID="cri-o://33e175e75c8a0f70e28412b6b026a9f0b5987cfe2dfc69fc2d2d0b83fb73ab1c" gracePeriod=600
Jan 21 09:34:29 crc kubenswrapper[5113]: I0121 09:34:29.553557 5113 generic.go:358] "Generic (PLEG): container finished" podID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerID="33e175e75c8a0f70e28412b6b026a9f0b5987cfe2dfc69fc2d2d0b83fb73ab1c" exitCode=0
Jan 21 09:34:29 crc kubenswrapper[5113]: I0121 09:34:29.553659 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" event={"ID":"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd","Type":"ContainerDied","Data":"33e175e75c8a0f70e28412b6b026a9f0b5987cfe2dfc69fc2d2d0b83fb73ab1c"}
Jan 21 09:34:29 crc kubenswrapper[5113]: I0121 09:34:29.554248 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" event={"ID":"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd","Type":"ContainerStarted","Data":"4cea019751a422b4c0c4aa18b701d7b4d78cd1315667ce00f2f2720f1584251c"}
Jan 21 09:34:29 crc kubenswrapper[5113]: I0121 09:34:29.554278 5113 scope.go:117] "RemoveContainer" containerID="313e78d1e84417b1cc72485f1361b34ce94e49f3a7ae332408769377ab7be1a0"
Jan 21 09:34:51 crc kubenswrapper[5113]: I0121 09:34:51.786137 5113 scope.go:117] "RemoveContainer" containerID="f3bf36e9ef03df536d13938b86760024f75ebfd10aced4b4e29f9df8a9970b7d"
Jan 21 09:35:12 crc kubenswrapper[5113]: I0121 09:35:12.108357 5113 generic.go:358] "Generic (PLEG): container finished" podID="b2c9d3e6-d659-4bc9-95a9-e5325d8fd568" containerID="d501cc293d7238f0db42db26f07474a665aae250ada27d982a9fb91e05d0e383" exitCode=0
Jan 21 09:35:12 crc kubenswrapper[5113]: I0121 09:35:12.108409 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568","Type":"ContainerDied","Data":"d501cc293d7238f0db42db26f07474a665aae250ada27d982a9fb91e05d0e383"}
Jan 21 09:35:13 crc kubenswrapper[5113]: I0121 09:35:13.466592 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build"
Jan 21 09:35:13 crc kubenswrapper[5113]: I0121 09:35:13.560129 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-builder-dockercfg-xwwzx-push\") pod \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") "
Jan 21 09:35:13 crc kubenswrapper[5113]: I0121 09:35:13.560175 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-build-ca-bundles\") pod \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") "
Jan 21 09:35:13 crc kubenswrapper[5113]: I0121 09:35:13.560220 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-container-storage-root\") pod \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") "
Jan 21 09:35:13 crc kubenswrapper[5113]: I0121 09:35:13.560234 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-build-system-configs\") pod \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") "
Jan 21 09:35:13 crc kubenswrapper[5113]: I0121 09:35:13.560424 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-buildworkdir\") pod \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") "
Jan 21 09:35:13 crc kubenswrapper[5113]: I0121 09:35:13.560507 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76qpl\" (UniqueName: \"kubernetes.io/projected/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-kube-api-access-76qpl\") pod \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") "
Jan 21 09:35:13 crc kubenswrapper[5113]: I0121 09:35:13.560578 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-builder-dockercfg-xwwzx-pull\") pod \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") "
Jan 21 09:35:13 crc kubenswrapper[5113]: I0121 09:35:13.560610 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-buildcachedir\") pod \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") "
Jan 21 09:35:13 crc kubenswrapper[5113]: I0121 09:35:13.560666 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-container-storage-run\") pod \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") "
Jan 21 09:35:13 crc kubenswrapper[5113]: I0121 09:35:13.560829 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-build-blob-cache\") pod \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") "
Jan 21 09:35:13 crc kubenswrapper[5113]: I0121 09:35:13.560806 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "b2c9d3e6-d659-4bc9-95a9-e5325d8fd568" (UID: "b2c9d3e6-d659-4bc9-95a9-e5325d8fd568"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 09:35:13 crc kubenswrapper[5113]: I0121 09:35:13.560915 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-node-pullsecrets\") pod \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") "
Jan 21 09:35:13 crc kubenswrapper[5113]: I0121 09:35:13.560957 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-build-proxy-ca-bundles\") pod \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\" (UID: \"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568\") "
Jan 21 09:35:13 crc kubenswrapper[5113]: I0121 09:35:13.560997 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "b2c9d3e6-d659-4bc9-95a9-e5325d8fd568" (UID: "b2c9d3e6-d659-4bc9-95a9-e5325d8fd568"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 09:35:13 crc kubenswrapper[5113]: I0121 09:35:13.561143 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "b2c9d3e6-d659-4bc9-95a9-e5325d8fd568" (UID: "b2c9d3e6-d659-4bc9-95a9-e5325d8fd568"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:35:13 crc kubenswrapper[5113]: I0121 09:35:13.561652 5113 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-buildcachedir\") on node \"crc\" DevicePath \"\""
Jan 21 09:35:13 crc kubenswrapper[5113]: I0121 09:35:13.561698 5113 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Jan 21 09:35:13 crc kubenswrapper[5113]: I0121 09:35:13.561729 5113 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-build-system-configs\") on node \"crc\" DevicePath \"\""
Jan 21 09:35:13 crc kubenswrapper[5113]: I0121 09:35:13.561807 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "b2c9d3e6-d659-4bc9-95a9-e5325d8fd568" (UID: "b2c9d3e6-d659-4bc9-95a9-e5325d8fd568"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:35:13 crc kubenswrapper[5113]: I0121 09:35:13.561865 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "b2c9d3e6-d659-4bc9-95a9-e5325d8fd568" (UID: "b2c9d3e6-d659-4bc9-95a9-e5325d8fd568"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:35:13 crc kubenswrapper[5113]: I0121 09:35:13.562719 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "b2c9d3e6-d659-4bc9-95a9-e5325d8fd568" (UID: "b2c9d3e6-d659-4bc9-95a9-e5325d8fd568"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:35:13 crc kubenswrapper[5113]: I0121 09:35:13.565346 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "b2c9d3e6-d659-4bc9-95a9-e5325d8fd568" (UID: "b2c9d3e6-d659-4bc9-95a9-e5325d8fd568"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:35:13 crc kubenswrapper[5113]: I0121 09:35:13.569909 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-kube-api-access-76qpl" (OuterVolumeSpecName: "kube-api-access-76qpl") pod "b2c9d3e6-d659-4bc9-95a9-e5325d8fd568" (UID: "b2c9d3e6-d659-4bc9-95a9-e5325d8fd568"). InnerVolumeSpecName "kube-api-access-76qpl". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:35:13 crc kubenswrapper[5113]: I0121 09:35:13.569966 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-builder-dockercfg-xwwzx-push" (OuterVolumeSpecName: "builder-dockercfg-xwwzx-push") pod "b2c9d3e6-d659-4bc9-95a9-e5325d8fd568" (UID: "b2c9d3e6-d659-4bc9-95a9-e5325d8fd568"). InnerVolumeSpecName "builder-dockercfg-xwwzx-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:35:13 crc kubenswrapper[5113]: I0121 09:35:13.570149 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-builder-dockercfg-xwwzx-pull" (OuterVolumeSpecName: "builder-dockercfg-xwwzx-pull") pod "b2c9d3e6-d659-4bc9-95a9-e5325d8fd568" (UID: "b2c9d3e6-d659-4bc9-95a9-e5325d8fd568"). InnerVolumeSpecName "builder-dockercfg-xwwzx-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:35:13 crc kubenswrapper[5113]: I0121 09:35:13.663058 5113 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 09:35:13 crc kubenswrapper[5113]: I0121 09:35:13.663093 5113 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-builder-dockercfg-xwwzx-push\") on node \"crc\" DevicePath \"\"" Jan 21 09:35:13 crc kubenswrapper[5113]: I0121 09:35:13.663103 5113 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 09:35:13 crc kubenswrapper[5113]: I0121 09:35:13.663114 5113 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" 
(UniqueName: \"kubernetes.io/empty-dir/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 21 09:35:13 crc kubenswrapper[5113]: I0121 09:35:13.663122 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-76qpl\" (UniqueName: \"kubernetes.io/projected/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-kube-api-access-76qpl\") on node \"crc\" DevicePath \"\"" Jan 21 09:35:13 crc kubenswrapper[5113]: I0121 09:35:13.663130 5113 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-builder-dockercfg-xwwzx-pull\") on node \"crc\" DevicePath \"\"" Jan 21 09:35:13 crc kubenswrapper[5113]: I0121 09:35:13.663139 5113 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 21 09:35:13 crc kubenswrapper[5113]: I0121 09:35:13.812333 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "b2c9d3e6-d659-4bc9-95a9-e5325d8fd568" (UID: "b2c9d3e6-d659-4bc9-95a9-e5325d8fd568"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:35:13 crc kubenswrapper[5113]: I0121 09:35:13.865930 5113 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 21 09:35:14 crc kubenswrapper[5113]: I0121 09:35:14.125016 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"b2c9d3e6-d659-4bc9-95a9-e5325d8fd568","Type":"ContainerDied","Data":"51d668c1e42ee6ffb87615ed332ac5083392d71f9997b3299e86937bbed62aef"} Jan 21 09:35:14 crc kubenswrapper[5113]: I0121 09:35:14.125314 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51d668c1e42ee6ffb87615ed332ac5083392d71f9997b3299e86937bbed62aef" Jan 21 09:35:14 crc kubenswrapper[5113]: I0121 09:35:14.125055 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 21 09:35:15 crc kubenswrapper[5113]: I0121 09:35:15.756117 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "b2c9d3e6-d659-4bc9-95a9-e5325d8fd568" (UID: "b2c9d3e6-d659-4bc9-95a9-e5325d8fd568"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:35:15 crc kubenswrapper[5113]: I0121 09:35:15.803158 5113 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b2c9d3e6-d659-4bc9-95a9-e5325d8fd568-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 21 09:35:18 crc kubenswrapper[5113]: I0121 09:35:18.314654 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 21 09:35:18 crc kubenswrapper[5113]: I0121 09:35:18.315626 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b2c9d3e6-d659-4bc9-95a9-e5325d8fd568" containerName="docker-build" Jan 21 09:35:18 crc kubenswrapper[5113]: I0121 09:35:18.315659 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2c9d3e6-d659-4bc9-95a9-e5325d8fd568" containerName="docker-build" Jan 21 09:35:18 crc kubenswrapper[5113]: I0121 09:35:18.315722 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b2c9d3e6-d659-4bc9-95a9-e5325d8fd568" containerName="manage-dockerfile" Jan 21 09:35:18 crc kubenswrapper[5113]: I0121 09:35:18.315755 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2c9d3e6-d659-4bc9-95a9-e5325d8fd568" containerName="manage-dockerfile" Jan 21 09:35:18 crc kubenswrapper[5113]: I0121 09:35:18.315773 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b2c9d3e6-d659-4bc9-95a9-e5325d8fd568" containerName="git-clone" Jan 21 09:35:18 crc kubenswrapper[5113]: I0121 09:35:18.315786 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2c9d3e6-d659-4bc9-95a9-e5325d8fd568" containerName="git-clone" Jan 21 09:35:18 crc kubenswrapper[5113]: I0121 09:35:18.315800 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="337657a4-5e17-4e6a-9174-6b7e1ecdb9ee" containerName="oc" Jan 21 09:35:18 crc kubenswrapper[5113]: I0121 09:35:18.315811 5113 
state_mem.go:107] "Deleted CPUSet assignment" podUID="337657a4-5e17-4e6a-9174-6b7e1ecdb9ee" containerName="oc" Jan 21 09:35:18 crc kubenswrapper[5113]: I0121 09:35:18.315966 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="337657a4-5e17-4e6a-9174-6b7e1ecdb9ee" containerName="oc" Jan 21 09:35:18 crc kubenswrapper[5113]: I0121 09:35:18.316002 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="b2c9d3e6-d659-4bc9-95a9-e5325d8fd568" containerName="docker-build" Jan 21 09:35:18 crc kubenswrapper[5113]: I0121 09:35:18.912852 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 21 09:35:18 crc kubenswrapper[5113]: I0121 09:35:18.915087 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-1-global-ca\"" Jan 21 09:35:18 crc kubenswrapper[5113]: I0121 09:35:18.915598 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-xwwzx\"" Jan 21 09:35:18 crc kubenswrapper[5113]: I0121 09:35:18.915711 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-1-sys-config\"" Jan 21 09:35:18 crc kubenswrapper[5113]: I0121 09:35:18.916893 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-1-ca\"" Jan 21 09:35:18 crc kubenswrapper[5113]: I0121 09:35:18.920461 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 21 09:35:18 crc kubenswrapper[5113]: I0121 09:35:18.949024 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/70e831f4-4654-4b4c-979d-f0e8a1e6157c-container-storage-run\") pod \"sg-core-1-build\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " 
pod="service-telemetry/sg-core-1-build" Jan 21 09:35:18 crc kubenswrapper[5113]: I0121 09:35:18.949126 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70e831f4-4654-4b4c-979d-f0e8a1e6157c-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " pod="service-telemetry/sg-core-1-build" Jan 21 09:35:18 crc kubenswrapper[5113]: I0121 09:35:18.949174 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/70e831f4-4654-4b4c-979d-f0e8a1e6157c-buildworkdir\") pod \"sg-core-1-build\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " pod="service-telemetry/sg-core-1-build" Jan 21 09:35:18 crc kubenswrapper[5113]: I0121 09:35:18.949263 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s79gp\" (UniqueName: \"kubernetes.io/projected/70e831f4-4654-4b4c-979d-f0e8a1e6157c-kube-api-access-s79gp\") pod \"sg-core-1-build\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " pod="service-telemetry/sg-core-1-build" Jan 21 09:35:18 crc kubenswrapper[5113]: I0121 09:35:18.949325 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/70e831f4-4654-4b4c-979d-f0e8a1e6157c-buildcachedir\") pod \"sg-core-1-build\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " pod="service-telemetry/sg-core-1-build" Jan 21 09:35:18 crc kubenswrapper[5113]: I0121 09:35:18.949417 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/70e831f4-4654-4b4c-979d-f0e8a1e6157c-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " 
pod="service-telemetry/sg-core-1-build" Jan 21 09:35:18 crc kubenswrapper[5113]: I0121 09:35:18.949519 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70e831f4-4654-4b4c-979d-f0e8a1e6157c-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " pod="service-telemetry/sg-core-1-build" Jan 21 09:35:18 crc kubenswrapper[5113]: I0121 09:35:18.949559 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/70e831f4-4654-4b4c-979d-f0e8a1e6157c-builder-dockercfg-xwwzx-pull\") pod \"sg-core-1-build\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " pod="service-telemetry/sg-core-1-build" Jan 21 09:35:18 crc kubenswrapper[5113]: I0121 09:35:18.949623 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/70e831f4-4654-4b4c-979d-f0e8a1e6157c-builder-dockercfg-xwwzx-push\") pod \"sg-core-1-build\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " pod="service-telemetry/sg-core-1-build" Jan 21 09:35:18 crc kubenswrapper[5113]: I0121 09:35:18.949789 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/70e831f4-4654-4b4c-979d-f0e8a1e6157c-container-storage-root\") pod \"sg-core-1-build\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " pod="service-telemetry/sg-core-1-build" Jan 21 09:35:18 crc kubenswrapper[5113]: I0121 09:35:18.949842 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/70e831f4-4654-4b4c-979d-f0e8a1e6157c-build-blob-cache\") pod \"sg-core-1-build\" (UID: 
\"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " pod="service-telemetry/sg-core-1-build" Jan 21 09:35:18 crc kubenswrapper[5113]: I0121 09:35:18.949896 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/70e831f4-4654-4b4c-979d-f0e8a1e6157c-build-system-configs\") pod \"sg-core-1-build\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " pod="service-telemetry/sg-core-1-build" Jan 21 09:35:19 crc kubenswrapper[5113]: I0121 09:35:19.051171 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70e831f4-4654-4b4c-979d-f0e8a1e6157c-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " pod="service-telemetry/sg-core-1-build" Jan 21 09:35:19 crc kubenswrapper[5113]: I0121 09:35:19.051211 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/70e831f4-4654-4b4c-979d-f0e8a1e6157c-builder-dockercfg-xwwzx-pull\") pod \"sg-core-1-build\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " pod="service-telemetry/sg-core-1-build" Jan 21 09:35:19 crc kubenswrapper[5113]: I0121 09:35:19.051236 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/70e831f4-4654-4b4c-979d-f0e8a1e6157c-builder-dockercfg-xwwzx-push\") pod \"sg-core-1-build\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " pod="service-telemetry/sg-core-1-build" Jan 21 09:35:19 crc kubenswrapper[5113]: I0121 09:35:19.051615 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/70e831f4-4654-4b4c-979d-f0e8a1e6157c-container-storage-root\") pod \"sg-core-1-build\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " 
pod="service-telemetry/sg-core-1-build" Jan 21 09:35:19 crc kubenswrapper[5113]: I0121 09:35:19.051756 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/70e831f4-4654-4b4c-979d-f0e8a1e6157c-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " pod="service-telemetry/sg-core-1-build" Jan 21 09:35:19 crc kubenswrapper[5113]: I0121 09:35:19.051888 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/70e831f4-4654-4b4c-979d-f0e8a1e6157c-build-system-configs\") pod \"sg-core-1-build\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " pod="service-telemetry/sg-core-1-build" Jan 21 09:35:19 crc kubenswrapper[5113]: I0121 09:35:19.051964 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/70e831f4-4654-4b4c-979d-f0e8a1e6157c-container-storage-root\") pod \"sg-core-1-build\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " pod="service-telemetry/sg-core-1-build" Jan 21 09:35:19 crc kubenswrapper[5113]: I0121 09:35:19.052119 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/70e831f4-4654-4b4c-979d-f0e8a1e6157c-container-storage-run\") pod \"sg-core-1-build\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " pod="service-telemetry/sg-core-1-build" Jan 21 09:35:19 crc kubenswrapper[5113]: I0121 09:35:19.052226 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70e831f4-4654-4b4c-979d-f0e8a1e6157c-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " pod="service-telemetry/sg-core-1-build" Jan 21 09:35:19 crc kubenswrapper[5113]: I0121 09:35:19.052311 5113 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70e831f4-4654-4b4c-979d-f0e8a1e6157c-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " pod="service-telemetry/sg-core-1-build" Jan 21 09:35:19 crc kubenswrapper[5113]: I0121 09:35:19.052456 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/70e831f4-4654-4b4c-979d-f0e8a1e6157c-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " pod="service-telemetry/sg-core-1-build" Jan 21 09:35:19 crc kubenswrapper[5113]: I0121 09:35:19.052457 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/70e831f4-4654-4b4c-979d-f0e8a1e6157c-build-system-configs\") pod \"sg-core-1-build\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " pod="service-telemetry/sg-core-1-build" Jan 21 09:35:19 crc kubenswrapper[5113]: I0121 09:35:19.052498 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/70e831f4-4654-4b4c-979d-f0e8a1e6157c-buildworkdir\") pod \"sg-core-1-build\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " pod="service-telemetry/sg-core-1-build" Jan 21 09:35:19 crc kubenswrapper[5113]: I0121 09:35:19.052581 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s79gp\" (UniqueName: \"kubernetes.io/projected/70e831f4-4654-4b4c-979d-f0e8a1e6157c-kube-api-access-s79gp\") pod \"sg-core-1-build\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " pod="service-telemetry/sg-core-1-build" Jan 21 09:35:19 crc kubenswrapper[5113]: I0121 09:35:19.052629 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/70e831f4-4654-4b4c-979d-f0e8a1e6157c-buildcachedir\") pod \"sg-core-1-build\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " pod="service-telemetry/sg-core-1-build" Jan 21 09:35:19 crc kubenswrapper[5113]: I0121 09:35:19.052661 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/70e831f4-4654-4b4c-979d-f0e8a1e6157c-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " pod="service-telemetry/sg-core-1-build" Jan 21 09:35:19 crc kubenswrapper[5113]: I0121 09:35:19.052796 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/70e831f4-4654-4b4c-979d-f0e8a1e6157c-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " pod="service-telemetry/sg-core-1-build" Jan 21 09:35:19 crc kubenswrapper[5113]: I0121 09:35:19.053003 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70e831f4-4654-4b4c-979d-f0e8a1e6157c-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " pod="service-telemetry/sg-core-1-build" Jan 21 09:35:19 crc kubenswrapper[5113]: I0121 09:35:19.053059 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/70e831f4-4654-4b4c-979d-f0e8a1e6157c-buildcachedir\") pod \"sg-core-1-build\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " pod="service-telemetry/sg-core-1-build" Jan 21 09:35:19 crc kubenswrapper[5113]: I0121 09:35:19.053258 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/70e831f4-4654-4b4c-979d-f0e8a1e6157c-container-storage-run\") pod \"sg-core-1-build\" (UID: 
\"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " pod="service-telemetry/sg-core-1-build" Jan 21 09:35:19 crc kubenswrapper[5113]: I0121 09:35:19.053390 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/70e831f4-4654-4b4c-979d-f0e8a1e6157c-buildworkdir\") pod \"sg-core-1-build\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " pod="service-telemetry/sg-core-1-build" Jan 21 09:35:19 crc kubenswrapper[5113]: I0121 09:35:19.056647 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/70e831f4-4654-4b4c-979d-f0e8a1e6157c-builder-dockercfg-xwwzx-pull\") pod \"sg-core-1-build\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " pod="service-telemetry/sg-core-1-build" Jan 21 09:35:19 crc kubenswrapper[5113]: I0121 09:35:19.056928 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/70e831f4-4654-4b4c-979d-f0e8a1e6157c-builder-dockercfg-xwwzx-push\") pod \"sg-core-1-build\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " pod="service-telemetry/sg-core-1-build" Jan 21 09:35:19 crc kubenswrapper[5113]: I0121 09:35:19.069932 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s79gp\" (UniqueName: \"kubernetes.io/projected/70e831f4-4654-4b4c-979d-f0e8a1e6157c-kube-api-access-s79gp\") pod \"sg-core-1-build\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " pod="service-telemetry/sg-core-1-build" Jan 21 09:35:19 crc kubenswrapper[5113]: I0121 09:35:19.229426 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 21 09:35:19 crc kubenswrapper[5113]: I0121 09:35:19.678072 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 21 09:35:20 crc kubenswrapper[5113]: I0121 09:35:20.348267 5113 generic.go:358] "Generic (PLEG): container finished" podID="70e831f4-4654-4b4c-979d-f0e8a1e6157c" containerID="557536d2eb563007ec09d97c5cfb4cdd885a96c9ad7b06fcb41292c8a1560301" exitCode=0 Jan 21 09:35:20 crc kubenswrapper[5113]: I0121 09:35:20.348400 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"70e831f4-4654-4b4c-979d-f0e8a1e6157c","Type":"ContainerDied","Data":"557536d2eb563007ec09d97c5cfb4cdd885a96c9ad7b06fcb41292c8a1560301"} Jan 21 09:35:20 crc kubenswrapper[5113]: I0121 09:35:20.348671 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"70e831f4-4654-4b4c-979d-f0e8a1e6157c","Type":"ContainerStarted","Data":"103261d524397f69078010b589d04569b444056b5ed60d496e10c06cb2d20d87"} Jan 21 09:35:21 crc kubenswrapper[5113]: I0121 09:35:21.366186 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"70e831f4-4654-4b4c-979d-f0e8a1e6157c","Type":"ContainerStarted","Data":"ee44cd5ce6f9340aaec86002d720ce986f82d519e80fcf2dbd725f4d13aae0ee"} Jan 21 09:35:21 crc kubenswrapper[5113]: I0121 09:35:21.400130 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-1-build" podStartSLOduration=3.400105042 podStartE2EDuration="3.400105042s" podCreationTimestamp="2026-01-21 09:35:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:35:21.399213546 +0000 UTC m=+1050.900040605" watchObservedRunningTime="2026-01-21 09:35:21.400105042 +0000 UTC m=+1050.900932101" Jan 21 09:35:28 crc 
kubenswrapper[5113]: I0121 09:35:28.725200 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 21 09:35:28 crc kubenswrapper[5113]: I0121 09:35:28.726125 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/sg-core-1-build" podUID="70e831f4-4654-4b4c-979d-f0e8a1e6157c" containerName="docker-build" containerID="cri-o://ee44cd5ce6f9340aaec86002d720ce986f82d519e80fcf2dbd725f4d13aae0ee" gracePeriod=30 Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.137204 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_70e831f4-4654-4b4c-979d-f0e8a1e6157c/docker-build/0.log" Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.138209 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.211602 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s79gp\" (UniqueName: \"kubernetes.io/projected/70e831f4-4654-4b4c-979d-f0e8a1e6157c-kube-api-access-s79gp\") pod \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.211725 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/70e831f4-4654-4b4c-979d-f0e8a1e6157c-node-pullsecrets\") pod \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.211817 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/70e831f4-4654-4b4c-979d-f0e8a1e6157c-build-system-configs\") pod \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " Jan 21 
09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.211886 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/70e831f4-4654-4b4c-979d-f0e8a1e6157c-container-storage-run\") pod \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.211901 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70e831f4-4654-4b4c-979d-f0e8a1e6157c-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "70e831f4-4654-4b4c-979d-f0e8a1e6157c" (UID: "70e831f4-4654-4b4c-979d-f0e8a1e6157c"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.211942 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/70e831f4-4654-4b4c-979d-f0e8a1e6157c-builder-dockercfg-xwwzx-push\") pod \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.212107 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/70e831f4-4654-4b4c-979d-f0e8a1e6157c-container-storage-root\") pod \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.212190 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70e831f4-4654-4b4c-979d-f0e8a1e6157c-build-proxy-ca-bundles\") pod \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.212244 5113 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/70e831f4-4654-4b4c-979d-f0e8a1e6157c-buildcachedir\") pod \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.212358 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/70e831f4-4654-4b4c-979d-f0e8a1e6157c-builder-dockercfg-xwwzx-pull\") pod \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.212434 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70e831f4-4654-4b4c-979d-f0e8a1e6157c-build-ca-bundles\") pod \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.212535 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/70e831f4-4654-4b4c-979d-f0e8a1e6157c-buildworkdir\") pod \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.212611 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/70e831f4-4654-4b4c-979d-f0e8a1e6157c-build-blob-cache\") pod \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\" (UID: \"70e831f4-4654-4b4c-979d-f0e8a1e6157c\") " Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.212933 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70e831f4-4654-4b4c-979d-f0e8a1e6157c-build-proxy-ca-bundles" (OuterVolumeSpecName: 
"build-proxy-ca-bundles") pod "70e831f4-4654-4b4c-979d-f0e8a1e6157c" (UID: "70e831f4-4654-4b4c-979d-f0e8a1e6157c"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.213101 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70e831f4-4654-4b4c-979d-f0e8a1e6157c-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "70e831f4-4654-4b4c-979d-f0e8a1e6157c" (UID: "70e831f4-4654-4b4c-979d-f0e8a1e6157c"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.213186 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70e831f4-4654-4b4c-979d-f0e8a1e6157c-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "70e831f4-4654-4b4c-979d-f0e8a1e6157c" (UID: "70e831f4-4654-4b4c-979d-f0e8a1e6157c"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.213336 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70e831f4-4654-4b4c-979d-f0e8a1e6157c-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "70e831f4-4654-4b4c-979d-f0e8a1e6157c" (UID: "70e831f4-4654-4b4c-979d-f0e8a1e6157c"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.213353 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70e831f4-4654-4b4c-979d-f0e8a1e6157c-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "70e831f4-4654-4b4c-979d-f0e8a1e6157c" (UID: "70e831f4-4654-4b4c-979d-f0e8a1e6157c"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.214071 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70e831f4-4654-4b4c-979d-f0e8a1e6157c-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "70e831f4-4654-4b4c-979d-f0e8a1e6157c" (UID: "70e831f4-4654-4b4c-979d-f0e8a1e6157c"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.216251 5113 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/70e831f4-4654-4b4c-979d-f0e8a1e6157c-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.216282 5113 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/70e831f4-4654-4b4c-979d-f0e8a1e6157c-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.216296 5113 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/70e831f4-4654-4b4c-979d-f0e8a1e6157c-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.216310 5113 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/70e831f4-4654-4b4c-979d-f0e8a1e6157c-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.216322 5113 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70e831f4-4654-4b4c-979d-f0e8a1e6157c-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.216333 5113 reconciler_common.go:299] "Volume detached for volume 
\"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/70e831f4-4654-4b4c-979d-f0e8a1e6157c-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.216346 5113 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70e831f4-4654-4b4c-979d-f0e8a1e6157c-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.228996 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70e831f4-4654-4b4c-979d-f0e8a1e6157c-builder-dockercfg-xwwzx-push" (OuterVolumeSpecName: "builder-dockercfg-xwwzx-push") pod "70e831f4-4654-4b4c-979d-f0e8a1e6157c" (UID: "70e831f4-4654-4b4c-979d-f0e8a1e6157c"). InnerVolumeSpecName "builder-dockercfg-xwwzx-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.229030 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70e831f4-4654-4b4c-979d-f0e8a1e6157c-kube-api-access-s79gp" (OuterVolumeSpecName: "kube-api-access-s79gp") pod "70e831f4-4654-4b4c-979d-f0e8a1e6157c" (UID: "70e831f4-4654-4b4c-979d-f0e8a1e6157c"). InnerVolumeSpecName "kube-api-access-s79gp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.229135 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70e831f4-4654-4b4c-979d-f0e8a1e6157c-builder-dockercfg-xwwzx-pull" (OuterVolumeSpecName: "builder-dockercfg-xwwzx-pull") pod "70e831f4-4654-4b4c-979d-f0e8a1e6157c" (UID: "70e831f4-4654-4b4c-979d-f0e8a1e6157c"). InnerVolumeSpecName "builder-dockercfg-xwwzx-pull". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.315811 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70e831f4-4654-4b4c-979d-f0e8a1e6157c-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "70e831f4-4654-4b4c-979d-f0e8a1e6157c" (UID: "70e831f4-4654-4b4c-979d-f0e8a1e6157c"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.317649 5113 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/70e831f4-4654-4b4c-979d-f0e8a1e6157c-builder-dockercfg-xwwzx-pull\") on node \"crc\" DevicePath \"\"" Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.317682 5113 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/70e831f4-4654-4b4c-979d-f0e8a1e6157c-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.317693 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s79gp\" (UniqueName: \"kubernetes.io/projected/70e831f4-4654-4b4c-979d-f0e8a1e6157c-kube-api-access-s79gp\") on node \"crc\" DevicePath \"\"" Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.317701 5113 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/70e831f4-4654-4b4c-979d-f0e8a1e6157c-builder-dockercfg-xwwzx-push\") on node \"crc\" DevicePath \"\"" Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.342476 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70e831f4-4654-4b4c-979d-f0e8a1e6157c-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "70e831f4-4654-4b4c-979d-f0e8a1e6157c" (UID: 
"70e831f4-4654-4b4c-979d-f0e8a1e6157c"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.419075 5113 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/70e831f4-4654-4b4c-979d-f0e8a1e6157c-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.425520 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_70e831f4-4654-4b4c-979d-f0e8a1e6157c/docker-build/0.log" Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.426252 5113 generic.go:358] "Generic (PLEG): container finished" podID="70e831f4-4654-4b4c-979d-f0e8a1e6157c" containerID="ee44cd5ce6f9340aaec86002d720ce986f82d519e80fcf2dbd725f4d13aae0ee" exitCode=1 Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.426338 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.426355 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"70e831f4-4654-4b4c-979d-f0e8a1e6157c","Type":"ContainerDied","Data":"ee44cd5ce6f9340aaec86002d720ce986f82d519e80fcf2dbd725f4d13aae0ee"} Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.426426 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"70e831f4-4654-4b4c-979d-f0e8a1e6157c","Type":"ContainerDied","Data":"103261d524397f69078010b589d04569b444056b5ed60d496e10c06cb2d20d87"} Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.426477 5113 scope.go:117] "RemoveContainer" containerID="ee44cd5ce6f9340aaec86002d720ce986f82d519e80fcf2dbd725f4d13aae0ee" Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.462160 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.473010 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.489043 5113 scope.go:117] "RemoveContainer" containerID="557536d2eb563007ec09d97c5cfb4cdd885a96c9ad7b06fcb41292c8a1560301" Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.585316 5113 scope.go:117] "RemoveContainer" containerID="ee44cd5ce6f9340aaec86002d720ce986f82d519e80fcf2dbd725f4d13aae0ee" Jan 21 09:35:29 crc kubenswrapper[5113]: E0121 09:35:29.585930 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee44cd5ce6f9340aaec86002d720ce986f82d519e80fcf2dbd725f4d13aae0ee\": container with ID starting with ee44cd5ce6f9340aaec86002d720ce986f82d519e80fcf2dbd725f4d13aae0ee not found: ID does not exist" containerID="ee44cd5ce6f9340aaec86002d720ce986f82d519e80fcf2dbd725f4d13aae0ee" 
Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.585969 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee44cd5ce6f9340aaec86002d720ce986f82d519e80fcf2dbd725f4d13aae0ee"} err="failed to get container status \"ee44cd5ce6f9340aaec86002d720ce986f82d519e80fcf2dbd725f4d13aae0ee\": rpc error: code = NotFound desc = could not find container \"ee44cd5ce6f9340aaec86002d720ce986f82d519e80fcf2dbd725f4d13aae0ee\": container with ID starting with ee44cd5ce6f9340aaec86002d720ce986f82d519e80fcf2dbd725f4d13aae0ee not found: ID does not exist" Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.585991 5113 scope.go:117] "RemoveContainer" containerID="557536d2eb563007ec09d97c5cfb4cdd885a96c9ad7b06fcb41292c8a1560301" Jan 21 09:35:29 crc kubenswrapper[5113]: E0121 09:35:29.586289 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"557536d2eb563007ec09d97c5cfb4cdd885a96c9ad7b06fcb41292c8a1560301\": container with ID starting with 557536d2eb563007ec09d97c5cfb4cdd885a96c9ad7b06fcb41292c8a1560301 not found: ID does not exist" containerID="557536d2eb563007ec09d97c5cfb4cdd885a96c9ad7b06fcb41292c8a1560301" Jan 21 09:35:29 crc kubenswrapper[5113]: I0121 09:35:29.586315 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"557536d2eb563007ec09d97c5cfb4cdd885a96c9ad7b06fcb41292c8a1560301"} err="failed to get container status \"557536d2eb563007ec09d97c5cfb4cdd885a96c9ad7b06fcb41292c8a1560301\": rpc error: code = NotFound desc = could not find container \"557536d2eb563007ec09d97c5cfb4cdd885a96c9ad7b06fcb41292c8a1560301\": container with ID starting with 557536d2eb563007ec09d97c5cfb4cdd885a96c9ad7b06fcb41292c8a1560301 not found: ID does not exist" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.384966 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-2-build"] Jan 21 
09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.386024 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="70e831f4-4654-4b4c-979d-f0e8a1e6157c" containerName="docker-build" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.386041 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="70e831f4-4654-4b4c-979d-f0e8a1e6157c" containerName="docker-build" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.386061 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="70e831f4-4654-4b4c-979d-f0e8a1e6157c" containerName="manage-dockerfile" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.386070 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="70e831f4-4654-4b4c-979d-f0e8a1e6157c" containerName="manage-dockerfile" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.386224 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="70e831f4-4654-4b4c-979d-f0e8a1e6157c" containerName="docker-build" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.719781 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.719984 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.723907 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-2-global-ca\"" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.724189 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-xwwzx\"" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.724211 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-2-sys-config\"" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.725008 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-2-ca\"" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.838864 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/03769401-1020-4b9e-9638-36fc2c68bb59-buildworkdir\") pod \"sg-core-2-build\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " pod="service-telemetry/sg-core-2-build" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.838929 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/03769401-1020-4b9e-9638-36fc2c68bb59-container-storage-root\") pod \"sg-core-2-build\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " pod="service-telemetry/sg-core-2-build" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.838958 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/03769401-1020-4b9e-9638-36fc2c68bb59-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " 
pod="service-telemetry/sg-core-2-build" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.839004 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03769401-1020-4b9e-9638-36fc2c68bb59-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " pod="service-telemetry/sg-core-2-build" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.839072 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/03769401-1020-4b9e-9638-36fc2c68bb59-builder-dockercfg-xwwzx-push\") pod \"sg-core-2-build\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " pod="service-telemetry/sg-core-2-build" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.839092 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhhqh\" (UniqueName: \"kubernetes.io/projected/03769401-1020-4b9e-9638-36fc2c68bb59-kube-api-access-zhhqh\") pod \"sg-core-2-build\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " pod="service-telemetry/sg-core-2-build" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.839115 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/03769401-1020-4b9e-9638-36fc2c68bb59-buildcachedir\") pod \"sg-core-2-build\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " pod="service-telemetry/sg-core-2-build" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.839141 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/03769401-1020-4b9e-9638-36fc2c68bb59-builder-dockercfg-xwwzx-pull\") pod \"sg-core-2-build\" (UID: 
\"03769401-1020-4b9e-9638-36fc2c68bb59\") " pod="service-telemetry/sg-core-2-build" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.839166 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03769401-1020-4b9e-9638-36fc2c68bb59-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " pod="service-telemetry/sg-core-2-build" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.839226 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/03769401-1020-4b9e-9638-36fc2c68bb59-container-storage-run\") pod \"sg-core-2-build\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " pod="service-telemetry/sg-core-2-build" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.839249 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/03769401-1020-4b9e-9638-36fc2c68bb59-build-system-configs\") pod \"sg-core-2-build\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " pod="service-telemetry/sg-core-2-build" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.839297 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/03769401-1020-4b9e-9638-36fc2c68bb59-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " pod="service-telemetry/sg-core-2-build" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.858729 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70e831f4-4654-4b4c-979d-f0e8a1e6157c" path="/var/lib/kubelet/pods/70e831f4-4654-4b4c-979d-f0e8a1e6157c/volumes" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.940821 5113 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03769401-1020-4b9e-9638-36fc2c68bb59-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " pod="service-telemetry/sg-core-2-build" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.940882 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/03769401-1020-4b9e-9638-36fc2c68bb59-builder-dockercfg-xwwzx-push\") pod \"sg-core-2-build\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " pod="service-telemetry/sg-core-2-build" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.941068 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zhhqh\" (UniqueName: \"kubernetes.io/projected/03769401-1020-4b9e-9638-36fc2c68bb59-kube-api-access-zhhqh\") pod \"sg-core-2-build\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " pod="service-telemetry/sg-core-2-build" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.941579 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03769401-1020-4b9e-9638-36fc2c68bb59-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " pod="service-telemetry/sg-core-2-build" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.941830 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/03769401-1020-4b9e-9638-36fc2c68bb59-buildcachedir\") pod \"sg-core-2-build\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " pod="service-telemetry/sg-core-2-build" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.941853 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/03769401-1020-4b9e-9638-36fc2c68bb59-buildcachedir\") pod \"sg-core-2-build\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " pod="service-telemetry/sg-core-2-build" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.941904 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/03769401-1020-4b9e-9638-36fc2c68bb59-builder-dockercfg-xwwzx-pull\") pod \"sg-core-2-build\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " pod="service-telemetry/sg-core-2-build" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.941933 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03769401-1020-4b9e-9638-36fc2c68bb59-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " pod="service-telemetry/sg-core-2-build" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.942069 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/03769401-1020-4b9e-9638-36fc2c68bb59-container-storage-run\") pod \"sg-core-2-build\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " pod="service-telemetry/sg-core-2-build" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.942342 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/03769401-1020-4b9e-9638-36fc2c68bb59-build-system-configs\") pod \"sg-core-2-build\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " pod="service-telemetry/sg-core-2-build" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.942425 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/03769401-1020-4b9e-9638-36fc2c68bb59-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " pod="service-telemetry/sg-core-2-build" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.942466 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/03769401-1020-4b9e-9638-36fc2c68bb59-container-storage-run\") pod \"sg-core-2-build\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " pod="service-telemetry/sg-core-2-build" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.942590 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/03769401-1020-4b9e-9638-36fc2c68bb59-buildworkdir\") pod \"sg-core-2-build\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " pod="service-telemetry/sg-core-2-build" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.942602 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/03769401-1020-4b9e-9638-36fc2c68bb59-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " pod="service-telemetry/sg-core-2-build" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.942659 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/03769401-1020-4b9e-9638-36fc2c68bb59-container-storage-root\") pod \"sg-core-2-build\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " pod="service-telemetry/sg-core-2-build" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.942698 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/03769401-1020-4b9e-9638-36fc2c68bb59-build-blob-cache\") pod \"sg-core-2-build\" (UID: 
\"03769401-1020-4b9e-9638-36fc2c68bb59\") " pod="service-telemetry/sg-core-2-build" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.942798 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03769401-1020-4b9e-9638-36fc2c68bb59-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " pod="service-telemetry/sg-core-2-build" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.943017 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/03769401-1020-4b9e-9638-36fc2c68bb59-build-system-configs\") pod \"sg-core-2-build\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " pod="service-telemetry/sg-core-2-build" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.943037 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/03769401-1020-4b9e-9638-36fc2c68bb59-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " pod="service-telemetry/sg-core-2-build" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.943117 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/03769401-1020-4b9e-9638-36fc2c68bb59-container-storage-root\") pod \"sg-core-2-build\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " pod="service-telemetry/sg-core-2-build" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.943237 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/03769401-1020-4b9e-9638-36fc2c68bb59-buildworkdir\") pod \"sg-core-2-build\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " pod="service-telemetry/sg-core-2-build" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.948832 5113 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/03769401-1020-4b9e-9638-36fc2c68bb59-builder-dockercfg-xwwzx-pull\") pod \"sg-core-2-build\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " pod="service-telemetry/sg-core-2-build" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.950669 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/03769401-1020-4b9e-9638-36fc2c68bb59-builder-dockercfg-xwwzx-push\") pod \"sg-core-2-build\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " pod="service-telemetry/sg-core-2-build" Jan 21 09:35:30 crc kubenswrapper[5113]: I0121 09:35:30.959689 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhhqh\" (UniqueName: \"kubernetes.io/projected/03769401-1020-4b9e-9638-36fc2c68bb59-kube-api-access-zhhqh\") pod \"sg-core-2-build\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " pod="service-telemetry/sg-core-2-build" Jan 21 09:35:31 crc kubenswrapper[5113]: I0121 09:35:31.089475 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 21 09:35:31 crc kubenswrapper[5113]: I0121 09:35:31.362697 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Jan 21 09:35:31 crc kubenswrapper[5113]: I0121 09:35:31.451928 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"03769401-1020-4b9e-9638-36fc2c68bb59","Type":"ContainerStarted","Data":"af6a46c4a55105fdd3b60700d3acf5aee027049d788ed3e70cbc86de78470572"} Jan 21 09:35:32 crc kubenswrapper[5113]: I0121 09:35:32.463494 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"03769401-1020-4b9e-9638-36fc2c68bb59","Type":"ContainerStarted","Data":"e8aaa6fd668d4f94935a94218ad570c73f0f8af0f39280b769df52e5cf5e397b"} Jan 21 09:35:33 crc kubenswrapper[5113]: I0121 09:35:33.472996 5113 generic.go:358] "Generic (PLEG): container finished" podID="03769401-1020-4b9e-9638-36fc2c68bb59" containerID="e8aaa6fd668d4f94935a94218ad570c73f0f8af0f39280b769df52e5cf5e397b" exitCode=0 Jan 21 09:35:33 crc kubenswrapper[5113]: I0121 09:35:33.473067 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"03769401-1020-4b9e-9638-36fc2c68bb59","Type":"ContainerDied","Data":"e8aaa6fd668d4f94935a94218ad570c73f0f8af0f39280b769df52e5cf5e397b"} Jan 21 09:35:34 crc kubenswrapper[5113]: I0121 09:35:34.495752 5113 generic.go:358] "Generic (PLEG): container finished" podID="03769401-1020-4b9e-9638-36fc2c68bb59" containerID="35b07d5f6ebbddc4a5a5dd259022b9e522d1ef3cf7b7276bceb9d4d8b5790f15" exitCode=0 Jan 21 09:35:34 crc kubenswrapper[5113]: I0121 09:35:34.495800 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"03769401-1020-4b9e-9638-36fc2c68bb59","Type":"ContainerDied","Data":"35b07d5f6ebbddc4a5a5dd259022b9e522d1ef3cf7b7276bceb9d4d8b5790f15"} Jan 21 09:35:34 crc 
kubenswrapper[5113]: I0121 09:35:34.534752 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_03769401-1020-4b9e-9638-36fc2c68bb59/manage-dockerfile/0.log" Jan 21 09:35:35 crc kubenswrapper[5113]: I0121 09:35:35.506831 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"03769401-1020-4b9e-9638-36fc2c68bb59","Type":"ContainerStarted","Data":"0861b93a7856b4182e2b33a01d5516f733edbd6991175bde7e3c8de552bcc915"} Jan 21 09:35:35 crc kubenswrapper[5113]: I0121 09:35:35.555165 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-2-build" podStartSLOduration=5.555123 podStartE2EDuration="5.555123s" podCreationTimestamp="2026-01-21 09:35:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:35:35.532203323 +0000 UTC m=+1065.033030422" watchObservedRunningTime="2026-01-21 09:35:35.555123 +0000 UTC m=+1065.055950209" Jan 21 09:36:00 crc kubenswrapper[5113]: I0121 09:36:00.144684 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483136-wcps4"] Jan 21 09:36:00 crc kubenswrapper[5113]: I0121 09:36:00.162217 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483136-wcps4" Jan 21 09:36:00 crc kubenswrapper[5113]: I0121 09:36:00.167165 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 09:36:00 crc kubenswrapper[5113]: I0121 09:36:00.167520 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-mmrr5\"" Jan 21 09:36:00 crc kubenswrapper[5113]: I0121 09:36:00.184910 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 09:36:00 crc kubenswrapper[5113]: I0121 09:36:00.187114 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483136-wcps4"] Jan 21 09:36:00 crc kubenswrapper[5113]: I0121 09:36:00.219259 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rqs8\" (UniqueName: \"kubernetes.io/projected/78bc24ef-816e-4958-bb7a-0ac6f7ce5b5c-kube-api-access-4rqs8\") pod \"auto-csr-approver-29483136-wcps4\" (UID: \"78bc24ef-816e-4958-bb7a-0ac6f7ce5b5c\") " pod="openshift-infra/auto-csr-approver-29483136-wcps4" Jan 21 09:36:00 crc kubenswrapper[5113]: I0121 09:36:00.320373 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4rqs8\" (UniqueName: \"kubernetes.io/projected/78bc24ef-816e-4958-bb7a-0ac6f7ce5b5c-kube-api-access-4rqs8\") pod \"auto-csr-approver-29483136-wcps4\" (UID: \"78bc24ef-816e-4958-bb7a-0ac6f7ce5b5c\") " pod="openshift-infra/auto-csr-approver-29483136-wcps4" Jan 21 09:36:00 crc kubenswrapper[5113]: I0121 09:36:00.346931 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rqs8\" (UniqueName: \"kubernetes.io/projected/78bc24ef-816e-4958-bb7a-0ac6f7ce5b5c-kube-api-access-4rqs8\") pod \"auto-csr-approver-29483136-wcps4\" (UID: 
\"78bc24ef-816e-4958-bb7a-0ac6f7ce5b5c\") " pod="openshift-infra/auto-csr-approver-29483136-wcps4" Jan 21 09:36:00 crc kubenswrapper[5113]: I0121 09:36:00.519209 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483136-wcps4" Jan 21 09:36:00 crc kubenswrapper[5113]: I0121 09:36:00.761715 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483136-wcps4"] Jan 21 09:36:01 crc kubenswrapper[5113]: I0121 09:36:01.714513 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483136-wcps4" event={"ID":"78bc24ef-816e-4958-bb7a-0ac6f7ce5b5c","Type":"ContainerStarted","Data":"9f7c839c8e02bb5a4951dee8b02b6b61c99bdfa6d15b4c7c77d1e5bd53d30f98"} Jan 21 09:36:03 crc kubenswrapper[5113]: I0121 09:36:03.731161 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483136-wcps4" event={"ID":"78bc24ef-816e-4958-bb7a-0ac6f7ce5b5c","Type":"ContainerStarted","Data":"88183488f1c50d70a02d6077602af648e12477a7713aa8aba23a12016987aed0"} Jan 21 09:36:03 crc kubenswrapper[5113]: I0121 09:36:03.757659 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29483136-wcps4" podStartSLOduration=2.65091362 podStartE2EDuration="3.757633231s" podCreationTimestamp="2026-01-21 09:36:00 +0000 UTC" firstStartedPulling="2026-01-21 09:36:00.770285154 +0000 UTC m=+1090.271112213" lastFinishedPulling="2026-01-21 09:36:01.877004775 +0000 UTC m=+1091.377831824" observedRunningTime="2026-01-21 09:36:03.750845666 +0000 UTC m=+1093.251672755" watchObservedRunningTime="2026-01-21 09:36:03.757633231 +0000 UTC m=+1093.258460320" Jan 21 09:36:04 crc kubenswrapper[5113]: I0121 09:36:04.740448 5113 generic.go:358] "Generic (PLEG): container finished" podID="78bc24ef-816e-4958-bb7a-0ac6f7ce5b5c" containerID="88183488f1c50d70a02d6077602af648e12477a7713aa8aba23a12016987aed0" 
exitCode=0 Jan 21 09:36:04 crc kubenswrapper[5113]: I0121 09:36:04.740583 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483136-wcps4" event={"ID":"78bc24ef-816e-4958-bb7a-0ac6f7ce5b5c","Type":"ContainerDied","Data":"88183488f1c50d70a02d6077602af648e12477a7713aa8aba23a12016987aed0"} Jan 21 09:36:05 crc kubenswrapper[5113]: I0121 09:36:05.985540 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483136-wcps4" Jan 21 09:36:06 crc kubenswrapper[5113]: I0121 09:36:06.122266 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rqs8\" (UniqueName: \"kubernetes.io/projected/78bc24ef-816e-4958-bb7a-0ac6f7ce5b5c-kube-api-access-4rqs8\") pod \"78bc24ef-816e-4958-bb7a-0ac6f7ce5b5c\" (UID: \"78bc24ef-816e-4958-bb7a-0ac6f7ce5b5c\") " Jan 21 09:36:06 crc kubenswrapper[5113]: I0121 09:36:06.134018 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78bc24ef-816e-4958-bb7a-0ac6f7ce5b5c-kube-api-access-4rqs8" (OuterVolumeSpecName: "kube-api-access-4rqs8") pod "78bc24ef-816e-4958-bb7a-0ac6f7ce5b5c" (UID: "78bc24ef-816e-4958-bb7a-0ac6f7ce5b5c"). InnerVolumeSpecName "kube-api-access-4rqs8". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:36:06 crc kubenswrapper[5113]: I0121 09:36:06.224399 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4rqs8\" (UniqueName: \"kubernetes.io/projected/78bc24ef-816e-4958-bb7a-0ac6f7ce5b5c-kube-api-access-4rqs8\") on node \"crc\" DevicePath \"\"" Jan 21 09:36:06 crc kubenswrapper[5113]: I0121 09:36:06.755984 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483136-wcps4" event={"ID":"78bc24ef-816e-4958-bb7a-0ac6f7ce5b5c","Type":"ContainerDied","Data":"9f7c839c8e02bb5a4951dee8b02b6b61c99bdfa6d15b4c7c77d1e5bd53d30f98"} Jan 21 09:36:06 crc kubenswrapper[5113]: I0121 09:36:06.756037 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f7c839c8e02bb5a4951dee8b02b6b61c99bdfa6d15b4c7c77d1e5bd53d30f98" Jan 21 09:36:06 crc kubenswrapper[5113]: I0121 09:36:06.756042 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483136-wcps4" Jan 21 09:36:07 crc kubenswrapper[5113]: I0121 09:36:07.050308 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483130-7wqzp"] Jan 21 09:36:07 crc kubenswrapper[5113]: I0121 09:36:07.057426 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483130-7wqzp"] Jan 21 09:36:08 crc kubenswrapper[5113]: I0121 09:36:08.854802 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08901054-a8e8-48af-b623-594a806778e6" path="/var/lib/kubelet/pods/08901054-a8e8-48af-b623-594a806778e6/volumes" Jan 21 09:36:28 crc kubenswrapper[5113]: I0121 09:36:28.340090 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Jan 21 09:36:28 crc kubenswrapper[5113]: I0121 09:36:28.341876 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 09:36:51 crc kubenswrapper[5113]: I0121 09:36:51.916974 5113 scope.go:117] "RemoveContainer" containerID="ff3e4286ade44b5c247cee413574dfb839eb51ac8704495a4c14afc2a7415930" Jan 21 09:36:58 crc kubenswrapper[5113]: I0121 09:36:58.340090 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 09:36:58 crc kubenswrapper[5113]: I0121 09:36:58.342453 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 09:37:28 crc kubenswrapper[5113]: I0121 09:37:28.341253 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 09:37:28 crc kubenswrapper[5113]: I0121 09:37:28.341984 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 09:37:28 crc kubenswrapper[5113]: I0121 09:37:28.342060 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" Jan 21 09:37:28 crc kubenswrapper[5113]: I0121 09:37:28.342974 5113 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4cea019751a422b4c0c4aa18b701d7b4d78cd1315667ce00f2f2720f1584251c"} pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 09:37:28 crc kubenswrapper[5113]: I0121 09:37:28.343077 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" containerID="cri-o://4cea019751a422b4c0c4aa18b701d7b4d78cd1315667ce00f2f2720f1584251c" gracePeriod=600 Jan 21 09:37:29 crc kubenswrapper[5113]: I0121 09:37:29.466295 5113 generic.go:358] "Generic (PLEG): container finished" podID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerID="4cea019751a422b4c0c4aa18b701d7b4d78cd1315667ce00f2f2720f1584251c" exitCode=0 Jan 21 09:37:29 crc kubenswrapper[5113]: I0121 09:37:29.466361 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" event={"ID":"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd","Type":"ContainerDied","Data":"4cea019751a422b4c0c4aa18b701d7b4d78cd1315667ce00f2f2720f1584251c"} Jan 21 09:37:29 crc kubenswrapper[5113]: I0121 09:37:29.467030 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" 
event={"ID":"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd","Type":"ContainerStarted","Data":"21cff32383aa2d9d302ef8effdf45aa80c8179b1a391761f749d397c6c018756"} Jan 21 09:37:29 crc kubenswrapper[5113]: I0121 09:37:29.467049 5113 scope.go:117] "RemoveContainer" containerID="33e175e75c8a0f70e28412b6b026a9f0b5987cfe2dfc69fc2d2d0b83fb73ab1c" Jan 21 09:37:51 crc kubenswrapper[5113]: I0121 09:37:51.371781 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vcw7s_11da35cd-b282-4537-ac8f-b6c86b18c21f/kube-multus/0.log" Jan 21 09:37:51 crc kubenswrapper[5113]: I0121 09:37:51.392307 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 09:37:51 crc kubenswrapper[5113]: I0121 09:37:51.394504 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vcw7s_11da35cd-b282-4537-ac8f-b6c86b18c21f/kube-multus/0.log" Jan 21 09:37:51 crc kubenswrapper[5113]: I0121 09:37:51.405980 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 09:38:00 crc kubenswrapper[5113]: I0121 09:38:00.149055 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483138-lm7vc"] Jan 21 09:38:00 crc kubenswrapper[5113]: I0121 09:38:00.150344 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="78bc24ef-816e-4958-bb7a-0ac6f7ce5b5c" containerName="oc" Jan 21 09:38:00 crc kubenswrapper[5113]: I0121 09:38:00.150361 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="78bc24ef-816e-4958-bb7a-0ac6f7ce5b5c" containerName="oc" Jan 21 09:38:00 crc kubenswrapper[5113]: I0121 09:38:00.150516 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="78bc24ef-816e-4958-bb7a-0ac6f7ce5b5c" 
containerName="oc" Jan 21 09:38:00 crc kubenswrapper[5113]: I0121 09:38:00.154361 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483138-lm7vc" Jan 21 09:38:00 crc kubenswrapper[5113]: I0121 09:38:00.157976 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-mmrr5\"" Jan 21 09:38:00 crc kubenswrapper[5113]: I0121 09:38:00.158888 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 09:38:00 crc kubenswrapper[5113]: I0121 09:38:00.159528 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483138-lm7vc"] Jan 21 09:38:00 crc kubenswrapper[5113]: I0121 09:38:00.161958 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 09:38:00 crc kubenswrapper[5113]: I0121 09:38:00.248635 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhpdl\" (UniqueName: \"kubernetes.io/projected/05b70962-5d67-4607-bcb9-9fa274469d26-kube-api-access-lhpdl\") pod \"auto-csr-approver-29483138-lm7vc\" (UID: \"05b70962-5d67-4607-bcb9-9fa274469d26\") " pod="openshift-infra/auto-csr-approver-29483138-lm7vc" Jan 21 09:38:00 crc kubenswrapper[5113]: I0121 09:38:00.352102 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lhpdl\" (UniqueName: \"kubernetes.io/projected/05b70962-5d67-4607-bcb9-9fa274469d26-kube-api-access-lhpdl\") pod \"auto-csr-approver-29483138-lm7vc\" (UID: \"05b70962-5d67-4607-bcb9-9fa274469d26\") " pod="openshift-infra/auto-csr-approver-29483138-lm7vc" Jan 21 09:38:00 crc kubenswrapper[5113]: I0121 09:38:00.395122 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhpdl\" (UniqueName: 
\"kubernetes.io/projected/05b70962-5d67-4607-bcb9-9fa274469d26-kube-api-access-lhpdl\") pod \"auto-csr-approver-29483138-lm7vc\" (UID: \"05b70962-5d67-4607-bcb9-9fa274469d26\") " pod="openshift-infra/auto-csr-approver-29483138-lm7vc" Jan 21 09:38:00 crc kubenswrapper[5113]: I0121 09:38:00.486263 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483138-lm7vc" Jan 21 09:38:00 crc kubenswrapper[5113]: I0121 09:38:00.798368 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483138-lm7vc"] Jan 21 09:38:01 crc kubenswrapper[5113]: I0121 09:38:01.709652 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483138-lm7vc" event={"ID":"05b70962-5d67-4607-bcb9-9fa274469d26","Type":"ContainerStarted","Data":"0df3e9de651803120feb9f8e7f57a8b5852422e68ade35162eb29902890f86b7"} Jan 21 09:38:02 crc kubenswrapper[5113]: I0121 09:38:02.719448 5113 generic.go:358] "Generic (PLEG): container finished" podID="05b70962-5d67-4607-bcb9-9fa274469d26" containerID="63bb2655f5b899c4a4115f7e88337134cd2eea000a900526308dbd83aad7fcca" exitCode=0 Jan 21 09:38:02 crc kubenswrapper[5113]: I0121 09:38:02.719562 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483138-lm7vc" event={"ID":"05b70962-5d67-4607-bcb9-9fa274469d26","Type":"ContainerDied","Data":"63bb2655f5b899c4a4115f7e88337134cd2eea000a900526308dbd83aad7fcca"} Jan 21 09:38:04 crc kubenswrapper[5113]: I0121 09:38:04.027745 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483138-lm7vc" Jan 21 09:38:04 crc kubenswrapper[5113]: I0121 09:38:04.113040 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhpdl\" (UniqueName: \"kubernetes.io/projected/05b70962-5d67-4607-bcb9-9fa274469d26-kube-api-access-lhpdl\") pod \"05b70962-5d67-4607-bcb9-9fa274469d26\" (UID: \"05b70962-5d67-4607-bcb9-9fa274469d26\") " Jan 21 09:38:04 crc kubenswrapper[5113]: I0121 09:38:04.122119 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05b70962-5d67-4607-bcb9-9fa274469d26-kube-api-access-lhpdl" (OuterVolumeSpecName: "kube-api-access-lhpdl") pod "05b70962-5d67-4607-bcb9-9fa274469d26" (UID: "05b70962-5d67-4607-bcb9-9fa274469d26"). InnerVolumeSpecName "kube-api-access-lhpdl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:38:04 crc kubenswrapper[5113]: I0121 09:38:04.214311 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lhpdl\" (UniqueName: \"kubernetes.io/projected/05b70962-5d67-4607-bcb9-9fa274469d26-kube-api-access-lhpdl\") on node \"crc\" DevicePath \"\"" Jan 21 09:38:04 crc kubenswrapper[5113]: I0121 09:38:04.738476 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483138-lm7vc" event={"ID":"05b70962-5d67-4607-bcb9-9fa274469d26","Type":"ContainerDied","Data":"0df3e9de651803120feb9f8e7f57a8b5852422e68ade35162eb29902890f86b7"} Jan 21 09:38:04 crc kubenswrapper[5113]: I0121 09:38:04.738925 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0df3e9de651803120feb9f8e7f57a8b5852422e68ade35162eb29902890f86b7" Jan 21 09:38:04 crc kubenswrapper[5113]: I0121 09:38:04.739007 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483138-lm7vc" Jan 21 09:38:05 crc kubenswrapper[5113]: I0121 09:38:05.134709 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483132-2hccf"] Jan 21 09:38:05 crc kubenswrapper[5113]: I0121 09:38:05.142199 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483132-2hccf"] Jan 21 09:38:06 crc kubenswrapper[5113]: I0121 09:38:06.857566 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b16d228c-645d-47fe-8089-aa8dff35fbd6" path="/var/lib/kubelet/pods/b16d228c-645d-47fe-8089-aa8dff35fbd6/volumes" Jan 21 09:38:52 crc kubenswrapper[5113]: I0121 09:38:52.047482 5113 scope.go:117] "RemoveContainer" containerID="4d5dd1b4025e32c11975bc76f00618f7425f57d1439d7acf22b29687550c12b6" Jan 21 09:39:05 crc kubenswrapper[5113]: I0121 09:39:05.236493 5113 generic.go:358] "Generic (PLEG): container finished" podID="03769401-1020-4b9e-9638-36fc2c68bb59" containerID="0861b93a7856b4182e2b33a01d5516f733edbd6991175bde7e3c8de552bcc915" exitCode=0 Jan 21 09:39:05 crc kubenswrapper[5113]: I0121 09:39:05.236616 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"03769401-1020-4b9e-9638-36fc2c68bb59","Type":"ContainerDied","Data":"0861b93a7856b4182e2b33a01d5516f733edbd6991175bde7e3c8de552bcc915"} Jan 21 09:39:06 crc kubenswrapper[5113]: I0121 09:39:06.526311 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 21 09:39:06 crc kubenswrapper[5113]: I0121 09:39:06.670672 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/03769401-1020-4b9e-9638-36fc2c68bb59-builder-dockercfg-xwwzx-pull\") pod \"03769401-1020-4b9e-9638-36fc2c68bb59\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " Jan 21 09:39:06 crc kubenswrapper[5113]: I0121 09:39:06.670765 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/03769401-1020-4b9e-9638-36fc2c68bb59-buildworkdir\") pod \"03769401-1020-4b9e-9638-36fc2c68bb59\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " Jan 21 09:39:06 crc kubenswrapper[5113]: I0121 09:39:06.670789 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/03769401-1020-4b9e-9638-36fc2c68bb59-node-pullsecrets\") pod \"03769401-1020-4b9e-9638-36fc2c68bb59\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " Jan 21 09:39:06 crc kubenswrapper[5113]: I0121 09:39:06.670817 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/03769401-1020-4b9e-9638-36fc2c68bb59-build-system-configs\") pod \"03769401-1020-4b9e-9638-36fc2c68bb59\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " Jan 21 09:39:06 crc kubenswrapper[5113]: I0121 09:39:06.670851 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhhqh\" (UniqueName: \"kubernetes.io/projected/03769401-1020-4b9e-9638-36fc2c68bb59-kube-api-access-zhhqh\") pod \"03769401-1020-4b9e-9638-36fc2c68bb59\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " Jan 21 09:39:06 crc kubenswrapper[5113]: I0121 09:39:06.671081 5113 operation_generator.go:781] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03769401-1020-4b9e-9638-36fc2c68bb59-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "03769401-1020-4b9e-9638-36fc2c68bb59" (UID: "03769401-1020-4b9e-9638-36fc2c68bb59"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:39:06 crc kubenswrapper[5113]: I0121 09:39:06.671198 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/03769401-1020-4b9e-9638-36fc2c68bb59-container-storage-root\") pod \"03769401-1020-4b9e-9638-36fc2c68bb59\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " Jan 21 09:39:06 crc kubenswrapper[5113]: I0121 09:39:06.671300 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03769401-1020-4b9e-9638-36fc2c68bb59-build-proxy-ca-bundles\") pod \"03769401-1020-4b9e-9638-36fc2c68bb59\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " Jan 21 09:39:06 crc kubenswrapper[5113]: I0121 09:39:06.671968 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03769401-1020-4b9e-9638-36fc2c68bb59-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "03769401-1020-4b9e-9638-36fc2c68bb59" (UID: "03769401-1020-4b9e-9638-36fc2c68bb59"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:39:06 crc kubenswrapper[5113]: I0121 09:39:06.672056 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/03769401-1020-4b9e-9638-36fc2c68bb59-buildcachedir\") pod \"03769401-1020-4b9e-9638-36fc2c68bb59\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " Jan 21 09:39:06 crc kubenswrapper[5113]: I0121 09:39:06.672171 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03769401-1020-4b9e-9638-36fc2c68bb59-build-ca-bundles\") pod \"03769401-1020-4b9e-9638-36fc2c68bb59\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " Jan 21 09:39:06 crc kubenswrapper[5113]: I0121 09:39:06.672214 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/03769401-1020-4b9e-9638-36fc2c68bb59-builder-dockercfg-xwwzx-push\") pod \"03769401-1020-4b9e-9638-36fc2c68bb59\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " Jan 21 09:39:06 crc kubenswrapper[5113]: I0121 09:39:06.672312 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/03769401-1020-4b9e-9638-36fc2c68bb59-build-blob-cache\") pod \"03769401-1020-4b9e-9638-36fc2c68bb59\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " Jan 21 09:39:06 crc kubenswrapper[5113]: I0121 09:39:06.672366 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/03769401-1020-4b9e-9638-36fc2c68bb59-container-storage-run\") pod \"03769401-1020-4b9e-9638-36fc2c68bb59\" (UID: \"03769401-1020-4b9e-9638-36fc2c68bb59\") " Jan 21 09:39:06 crc kubenswrapper[5113]: I0121 09:39:06.672105 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/host-path/03769401-1020-4b9e-9638-36fc2c68bb59-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "03769401-1020-4b9e-9638-36fc2c68bb59" (UID: "03769401-1020-4b9e-9638-36fc2c68bb59"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:39:06 crc kubenswrapper[5113]: I0121 09:39:06.672720 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03769401-1020-4b9e-9638-36fc2c68bb59-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "03769401-1020-4b9e-9638-36fc2c68bb59" (UID: "03769401-1020-4b9e-9638-36fc2c68bb59"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:39:06 crc kubenswrapper[5113]: I0121 09:39:06.673082 5113 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/03769401-1020-4b9e-9638-36fc2c68bb59-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 21 09:39:06 crc kubenswrapper[5113]: I0121 09:39:06.673116 5113 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/03769401-1020-4b9e-9638-36fc2c68bb59-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 21 09:39:06 crc kubenswrapper[5113]: I0121 09:39:06.673174 5113 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03769401-1020-4b9e-9638-36fc2c68bb59-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 09:39:06 crc kubenswrapper[5113]: I0121 09:39:06.673196 5113 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/03769401-1020-4b9e-9638-36fc2c68bb59-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 21 09:39:06 crc kubenswrapper[5113]: I0121 09:39:06.673970 5113 operation_generator.go:781] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/03769401-1020-4b9e-9638-36fc2c68bb59-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "03769401-1020-4b9e-9638-36fc2c68bb59" (UID: "03769401-1020-4b9e-9638-36fc2c68bb59"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:39:06 crc kubenswrapper[5113]: I0121 09:39:06.683141 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03769401-1020-4b9e-9638-36fc2c68bb59-kube-api-access-zhhqh" (OuterVolumeSpecName: "kube-api-access-zhhqh") pod "03769401-1020-4b9e-9638-36fc2c68bb59" (UID: "03769401-1020-4b9e-9638-36fc2c68bb59"). InnerVolumeSpecName "kube-api-access-zhhqh". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:39:06 crc kubenswrapper[5113]: I0121 09:39:06.684981 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03769401-1020-4b9e-9638-36fc2c68bb59-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "03769401-1020-4b9e-9638-36fc2c68bb59" (UID: "03769401-1020-4b9e-9638-36fc2c68bb59"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:39:06 crc kubenswrapper[5113]: I0121 09:39:06.689095 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03769401-1020-4b9e-9638-36fc2c68bb59-builder-dockercfg-xwwzx-push" (OuterVolumeSpecName: "builder-dockercfg-xwwzx-push") pod "03769401-1020-4b9e-9638-36fc2c68bb59" (UID: "03769401-1020-4b9e-9638-36fc2c68bb59"). InnerVolumeSpecName "builder-dockercfg-xwwzx-push". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:39:06 crc kubenswrapper[5113]: I0121 09:39:06.689428 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03769401-1020-4b9e-9638-36fc2c68bb59-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "03769401-1020-4b9e-9638-36fc2c68bb59" (UID: "03769401-1020-4b9e-9638-36fc2c68bb59"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:39:06 crc kubenswrapper[5113]: I0121 09:39:06.689836 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03769401-1020-4b9e-9638-36fc2c68bb59-builder-dockercfg-xwwzx-pull" (OuterVolumeSpecName: "builder-dockercfg-xwwzx-pull") pod "03769401-1020-4b9e-9638-36fc2c68bb59" (UID: "03769401-1020-4b9e-9638-36fc2c68bb59"). InnerVolumeSpecName "builder-dockercfg-xwwzx-pull". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:39:06 crc kubenswrapper[5113]: I0121 09:39:06.773907 5113 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/03769401-1020-4b9e-9638-36fc2c68bb59-builder-dockercfg-xwwzx-pull\") on node \"crc\" DevicePath \"\""
Jan 21 09:39:06 crc kubenswrapper[5113]: I0121 09:39:06.773953 5113 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/03769401-1020-4b9e-9638-36fc2c68bb59-buildworkdir\") on node \"crc\" DevicePath \"\""
Jan 21 09:39:06 crc kubenswrapper[5113]: I0121 09:39:06.773965 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zhhqh\" (UniqueName: \"kubernetes.io/projected/03769401-1020-4b9e-9638-36fc2c68bb59-kube-api-access-zhhqh\") on node \"crc\" DevicePath \"\""
Jan 21 09:39:06 crc kubenswrapper[5113]: I0121 09:39:06.773976 5113 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03769401-1020-4b9e-9638-36fc2c68bb59-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 21 09:39:06 crc kubenswrapper[5113]: I0121 09:39:06.773986 5113 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/03769401-1020-4b9e-9638-36fc2c68bb59-builder-dockercfg-xwwzx-push\") on node \"crc\" DevicePath \"\""
Jan 21 09:39:06 crc kubenswrapper[5113]: I0121 09:39:06.773996 5113 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/03769401-1020-4b9e-9638-36fc2c68bb59-container-storage-run\") on node \"crc\" DevicePath \"\""
Jan 21 09:39:07 crc kubenswrapper[5113]: I0121 09:39:07.115158 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03769401-1020-4b9e-9638-36fc2c68bb59-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "03769401-1020-4b9e-9638-36fc2c68bb59" (UID: "03769401-1020-4b9e-9638-36fc2c68bb59"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:39:07 crc kubenswrapper[5113]: I0121 09:39:07.181128 5113 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/03769401-1020-4b9e-9638-36fc2c68bb59-build-blob-cache\") on node \"crc\" DevicePath \"\""
Jan 21 09:39:07 crc kubenswrapper[5113]: I0121 09:39:07.253654 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"03769401-1020-4b9e-9638-36fc2c68bb59","Type":"ContainerDied","Data":"af6a46c4a55105fdd3b60700d3acf5aee027049d788ed3e70cbc86de78470572"}
Jan 21 09:39:07 crc kubenswrapper[5113]: I0121 09:39:07.253709 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af6a46c4a55105fdd3b60700d3acf5aee027049d788ed3e70cbc86de78470572"
Jan 21 09:39:07 crc kubenswrapper[5113]: I0121 09:39:07.253748 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build"
Jan 21 09:39:09 crc kubenswrapper[5113]: I0121 09:39:09.858101 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03769401-1020-4b9e-9638-36fc2c68bb59-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "03769401-1020-4b9e-9638-36fc2c68bb59" (UID: "03769401-1020-4b9e-9638-36fc2c68bb59"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:39:09 crc kubenswrapper[5113]: I0121 09:39:09.921021 5113 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/03769401-1020-4b9e-9638-36fc2c68bb59-container-storage-root\") on node \"crc\" DevicePath \"\""
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.286498 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-1-build"]
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.287769 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="03769401-1020-4b9e-9638-36fc2c68bb59" containerName="docker-build"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.287812 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="03769401-1020-4b9e-9638-36fc2c68bb59" containerName="docker-build"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.287852 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="05b70962-5d67-4607-bcb9-9fa274469d26" containerName="oc"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.287868 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="05b70962-5d67-4607-bcb9-9fa274469d26" containerName="oc"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.287916 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="03769401-1020-4b9e-9638-36fc2c68bb59" containerName="manage-dockerfile"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.287968 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="03769401-1020-4b9e-9638-36fc2c68bb59" containerName="manage-dockerfile"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.288023 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="03769401-1020-4b9e-9638-36fc2c68bb59" containerName="git-clone"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.288042 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="03769401-1020-4b9e-9638-36fc2c68bb59" containerName="git-clone"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.288283 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="03769401-1020-4b9e-9638-36fc2c68bb59" containerName="docker-build"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.288324 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="05b70962-5d67-4607-bcb9-9fa274469d26" containerName="oc"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.440276 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"]
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.440562 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.442850 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-1-ca\""
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.443274 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-xwwzx\""
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.443566 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-1-global-ca\""
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.445224 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-1-sys-config\""
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.550432 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.550512 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.550577 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.550638 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.550811 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.550892 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-builder-dockercfg-xwwzx-pull\") pod \"sg-bridge-1-build\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.550947 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.551052 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-builder-dockercfg-xwwzx-push\") pod \"sg-bridge-1-build\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.551195 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.551365 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.551506 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.551538 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k67v\" (UniqueName: \"kubernetes.io/projected/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-kube-api-access-8k67v\") pod \"sg-bridge-1-build\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.653441 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.653528 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.653615 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.653666 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-builder-dockercfg-xwwzx-pull\") pod \"sg-bridge-1-build\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.653706 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.653783 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-builder-dockercfg-xwwzx-push\") pod \"sg-bridge-1-build\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.653837 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.653891 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.653999 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.654038 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8k67v\" (UniqueName: \"kubernetes.io/projected/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-kube-api-access-8k67v\") pod \"sg-bridge-1-build\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.654085 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.654119 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.654150 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.654183 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.655231 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.655598 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.655685 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.655860 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.655878 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.655872 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.656397 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.665271 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-builder-dockercfg-xwwzx-pull\") pod \"sg-bridge-1-build\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.665453 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-builder-dockercfg-xwwzx-push\") pod \"sg-bridge-1-build\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.685678 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k67v\" (UniqueName: \"kubernetes.io/projected/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-kube-api-access-8k67v\") pod \"sg-bridge-1-build\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") " pod="service-telemetry/sg-bridge-1-build"
Jan 21 09:39:11 crc kubenswrapper[5113]: I0121 09:39:11.767245 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build"
Jan 21 09:39:12 crc kubenswrapper[5113]: I0121 09:39:12.060768 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"]
Jan 21 09:39:12 crc kubenswrapper[5113]: I0121 09:39:12.075511 5113 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 09:39:12 crc kubenswrapper[5113]: I0121 09:39:12.299137 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"882a5b49-ccd8-46fa-a850-85b60f2fb2fc","Type":"ContainerStarted","Data":"66022da5fbe1b89f6330b0f0e78aeed4d196cf65a2a0cf9b3df5884e77038cfe"}
Jan 21 09:39:13 crc kubenswrapper[5113]: I0121 09:39:13.310626 5113 generic.go:358] "Generic (PLEG): container finished" podID="882a5b49-ccd8-46fa-a850-85b60f2fb2fc" containerID="978f545a91ebfc8bf615446c9928cf497fe64b5b285094b54cbb8ba069fcbacc" exitCode=0
Jan 21 09:39:13 crc kubenswrapper[5113]: I0121 09:39:13.311018 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"882a5b49-ccd8-46fa-a850-85b60f2fb2fc","Type":"ContainerDied","Data":"978f545a91ebfc8bf615446c9928cf497fe64b5b285094b54cbb8ba069fcbacc"}
Jan 21 09:39:14 crc kubenswrapper[5113]: I0121 09:39:14.324601 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"882a5b49-ccd8-46fa-a850-85b60f2fb2fc","Type":"ContainerStarted","Data":"f7db857e1644e6dbd6b0a460d0b2cde3e7491b567c380dd5af333ed423c0f252"}
Jan 21 09:39:14 crc kubenswrapper[5113]: I0121 09:39:14.349807 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-1-build" podStartSLOduration=3.349783351 podStartE2EDuration="3.349783351s" podCreationTimestamp="2026-01-21 09:39:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:39:14.348089553 +0000 UTC m=+1283.848916642" watchObservedRunningTime="2026-01-21 09:39:14.349783351 +0000 UTC m=+1283.850610440"
Jan 21 09:39:21 crc kubenswrapper[5113]: I0121 09:39:21.525657 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"]
Jan 21 09:39:21 crc kubenswrapper[5113]: I0121 09:39:21.527856 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/sg-bridge-1-build" podUID="882a5b49-ccd8-46fa-a850-85b60f2fb2fc" containerName="docker-build" containerID="cri-o://f7db857e1644e6dbd6b0a460d0b2cde3e7491b567c380dd5af333ed423c0f252" gracePeriod=30
Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.020171 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_882a5b49-ccd8-46fa-a850-85b60f2fb2fc/docker-build/0.log"
Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.021047 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build"
Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.107440 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-container-storage-run\") pod \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") "
Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.107501 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-builder-dockercfg-xwwzx-pull\") pod \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") "
Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.107535 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-build-blob-cache\") pod \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") "
Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.107598 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8k67v\" (UniqueName: \"kubernetes.io/projected/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-kube-api-access-8k67v\") pod \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") "
Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.107632 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-buildcachedir\") pod \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") "
Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.107692 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-node-pullsecrets\") pod \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") "
Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.107783 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "882a5b49-ccd8-46fa-a850-85b60f2fb2fc" (UID: "882a5b49-ccd8-46fa-a850-85b60f2fb2fc"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.107894 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "882a5b49-ccd8-46fa-a850-85b60f2fb2fc" (UID: "882a5b49-ccd8-46fa-a850-85b60f2fb2fc"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.107861 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-build-system-configs\") pod \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") "
Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.107946 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-build-proxy-ca-bundles\") pod \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") "
Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.107995 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-buildworkdir\") pod \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") "
Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.108640 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "882a5b49-ccd8-46fa-a850-85b60f2fb2fc" (UID: "882a5b49-ccd8-46fa-a850-85b60f2fb2fc"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.108021 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-builder-dockercfg-xwwzx-push\") pod \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") "
Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.108832 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "882a5b49-ccd8-46fa-a850-85b60f2fb2fc" (UID: "882a5b49-ccd8-46fa-a850-85b60f2fb2fc"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.109283 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "882a5b49-ccd8-46fa-a850-85b60f2fb2fc" (UID: "882a5b49-ccd8-46fa-a850-85b60f2fb2fc"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.109315 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-build-ca-bundles\") pod \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") "
Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.109352 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "882a5b49-ccd8-46fa-a850-85b60f2fb2fc" (UID: "882a5b49-ccd8-46fa-a850-85b60f2fb2fc"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.109381 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-container-storage-root\") pod \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\" (UID: \"882a5b49-ccd8-46fa-a850-85b60f2fb2fc\") "
Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.109393 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "882a5b49-ccd8-46fa-a850-85b60f2fb2fc" (UID: "882a5b49-ccd8-46fa-a850-85b60f2fb2fc"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.110889 5113 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.110976 5113 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-buildworkdir\") on node \"crc\" DevicePath \"\""
Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.111000 5113 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.111052 5113 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-container-storage-run\") on node \"crc\" DevicePath \"\""
Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.111072 5113 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-buildcachedir\") on node \"crc\" DevicePath \"\""
Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.111155 5113 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.111175 5113 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-build-system-configs\") on node \"crc\" DevicePath \"\""
Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.114641 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-builder-dockercfg-xwwzx-pull" (OuterVolumeSpecName: "builder-dockercfg-xwwzx-pull") pod "882a5b49-ccd8-46fa-a850-85b60f2fb2fc" (UID: "882a5b49-ccd8-46fa-a850-85b60f2fb2fc"). InnerVolumeSpecName "builder-dockercfg-xwwzx-pull". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.116555 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-kube-api-access-8k67v" (OuterVolumeSpecName: "kube-api-access-8k67v") pod "882a5b49-ccd8-46fa-a850-85b60f2fb2fc" (UID: "882a5b49-ccd8-46fa-a850-85b60f2fb2fc"). InnerVolumeSpecName "kube-api-access-8k67v". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.116900 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-builder-dockercfg-xwwzx-push" (OuterVolumeSpecName: "builder-dockercfg-xwwzx-push") pod "882a5b49-ccd8-46fa-a850-85b60f2fb2fc" (UID: "882a5b49-ccd8-46fa-a850-85b60f2fb2fc"). InnerVolumeSpecName "builder-dockercfg-xwwzx-push". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.120011 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "882a5b49-ccd8-46fa-a850-85b60f2fb2fc" (UID: "882a5b49-ccd8-46fa-a850-85b60f2fb2fc"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.188610 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "882a5b49-ccd8-46fa-a850-85b60f2fb2fc" (UID: "882a5b49-ccd8-46fa-a850-85b60f2fb2fc"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.212440 5113 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-builder-dockercfg-xwwzx-push\") on node \"crc\" DevicePath \"\""
Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.212476 5113 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-container-storage-root\") on node \"crc\" DevicePath \"\""
Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.212484 5113 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-builder-dockercfg-xwwzx-pull\") on node \"crc\" DevicePath \"\""
Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.212494 5113 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-build-blob-cache\") on node \"crc\" DevicePath \"\""
Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.212504 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8k67v\" (UniqueName: \"kubernetes.io/projected/882a5b49-ccd8-46fa-a850-85b60f2fb2fc-kube-api-access-8k67v\") on node \"crc\" DevicePath \"\""
Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.392331 5113
log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_882a5b49-ccd8-46fa-a850-85b60f2fb2fc/docker-build/0.log" Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.392775 5113 generic.go:358] "Generic (PLEG): container finished" podID="882a5b49-ccd8-46fa-a850-85b60f2fb2fc" containerID="f7db857e1644e6dbd6b0a460d0b2cde3e7491b567c380dd5af333ed423c0f252" exitCode=1 Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.392833 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.392879 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"882a5b49-ccd8-46fa-a850-85b60f2fb2fc","Type":"ContainerDied","Data":"f7db857e1644e6dbd6b0a460d0b2cde3e7491b567c380dd5af333ed423c0f252"} Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.392937 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"882a5b49-ccd8-46fa-a850-85b60f2fb2fc","Type":"ContainerDied","Data":"66022da5fbe1b89f6330b0f0e78aeed4d196cf65a2a0cf9b3df5884e77038cfe"} Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.392967 5113 scope.go:117] "RemoveContainer" containerID="f7db857e1644e6dbd6b0a460d0b2cde3e7491b567c380dd5af333ed423c0f252" Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.436704 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.444082 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.451500 5113 scope.go:117] "RemoveContainer" containerID="978f545a91ebfc8bf615446c9928cf497fe64b5b285094b54cbb8ba069fcbacc" Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.546071 5113 scope.go:117] "RemoveContainer" 
containerID="f7db857e1644e6dbd6b0a460d0b2cde3e7491b567c380dd5af333ed423c0f252" Jan 21 09:39:22 crc kubenswrapper[5113]: E0121 09:39:22.546655 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7db857e1644e6dbd6b0a460d0b2cde3e7491b567c380dd5af333ed423c0f252\": container with ID starting with f7db857e1644e6dbd6b0a460d0b2cde3e7491b567c380dd5af333ed423c0f252 not found: ID does not exist" containerID="f7db857e1644e6dbd6b0a460d0b2cde3e7491b567c380dd5af333ed423c0f252" Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.546702 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7db857e1644e6dbd6b0a460d0b2cde3e7491b567c380dd5af333ed423c0f252"} err="failed to get container status \"f7db857e1644e6dbd6b0a460d0b2cde3e7491b567c380dd5af333ed423c0f252\": rpc error: code = NotFound desc = could not find container \"f7db857e1644e6dbd6b0a460d0b2cde3e7491b567c380dd5af333ed423c0f252\": container with ID starting with f7db857e1644e6dbd6b0a460d0b2cde3e7491b567c380dd5af333ed423c0f252 not found: ID does not exist" Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.546749 5113 scope.go:117] "RemoveContainer" containerID="978f545a91ebfc8bf615446c9928cf497fe64b5b285094b54cbb8ba069fcbacc" Jan 21 09:39:22 crc kubenswrapper[5113]: E0121 09:39:22.547623 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"978f545a91ebfc8bf615446c9928cf497fe64b5b285094b54cbb8ba069fcbacc\": container with ID starting with 978f545a91ebfc8bf615446c9928cf497fe64b5b285094b54cbb8ba069fcbacc not found: ID does not exist" containerID="978f545a91ebfc8bf615446c9928cf497fe64b5b285094b54cbb8ba069fcbacc" Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.547715 5113 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"978f545a91ebfc8bf615446c9928cf497fe64b5b285094b54cbb8ba069fcbacc"} err="failed to get container status \"978f545a91ebfc8bf615446c9928cf497fe64b5b285094b54cbb8ba069fcbacc\": rpc error: code = NotFound desc = could not find container \"978f545a91ebfc8bf615446c9928cf497fe64b5b285094b54cbb8ba069fcbacc\": container with ID starting with 978f545a91ebfc8bf615446c9928cf497fe64b5b285094b54cbb8ba069fcbacc not found: ID does not exist" Jan 21 09:39:22 crc kubenswrapper[5113]: I0121 09:39:22.853701 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="882a5b49-ccd8-46fa-a850-85b60f2fb2fc" path="/var/lib/kubelet/pods/882a5b49-ccd8-46fa-a850-85b60f2fb2fc/volumes" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.204731 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-2-build"] Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.206093 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="882a5b49-ccd8-46fa-a850-85b60f2fb2fc" containerName="docker-build" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.206124 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="882a5b49-ccd8-46fa-a850-85b60f2fb2fc" containerName="docker-build" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.206155 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="882a5b49-ccd8-46fa-a850-85b60f2fb2fc" containerName="manage-dockerfile" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.206167 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="882a5b49-ccd8-46fa-a850-85b60f2fb2fc" containerName="manage-dockerfile" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.206401 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="882a5b49-ccd8-46fa-a850-85b60f2fb2fc" containerName="docker-build" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.217233 5113 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.225272 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-2-sys-config\"" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.225319 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-2-ca\"" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.225323 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-2-global-ca\"" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.225473 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-xwwzx\"" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.244610 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.334535 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/c985ccf6-8457-45c1-acdc-667628d80d5f-builder-dockercfg-xwwzx-pull\") pod \"sg-bridge-2-build\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.334594 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c985ccf6-8457-45c1-acdc-667628d80d5f-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.334637 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" 
(UniqueName: \"kubernetes.io/empty-dir/c985ccf6-8457-45c1-acdc-667628d80d5f-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.334673 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c985ccf6-8457-45c1-acdc-667628d80d5f-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.334688 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c985ccf6-8457-45c1-acdc-667628d80d5f-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.334834 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c985ccf6-8457-45c1-acdc-667628d80d5f-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.334891 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c985ccf6-8457-45c1-acdc-667628d80d5f-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.334948 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c985ccf6-8457-45c1-acdc-667628d80d5f-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.335048 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdvwx\" (UniqueName: \"kubernetes.io/projected/c985ccf6-8457-45c1-acdc-667628d80d5f-kube-api-access-qdvwx\") pod \"sg-bridge-2-build\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.335134 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c985ccf6-8457-45c1-acdc-667628d80d5f-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.335280 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/c985ccf6-8457-45c1-acdc-667628d80d5f-builder-dockercfg-xwwzx-push\") pod \"sg-bridge-2-build\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.335311 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c985ccf6-8457-45c1-acdc-667628d80d5f-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.436852 5113 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c985ccf6-8457-45c1-acdc-667628d80d5f-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.436969 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c985ccf6-8457-45c1-acdc-667628d80d5f-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.436978 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c985ccf6-8457-45c1-acdc-667628d80d5f-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.437118 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c985ccf6-8457-45c1-acdc-667628d80d5f-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.437168 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c985ccf6-8457-45c1-acdc-667628d80d5f-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.437215 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/c985ccf6-8457-45c1-acdc-667628d80d5f-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.437266 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c985ccf6-8457-45c1-acdc-667628d80d5f-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.437362 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c985ccf6-8457-45c1-acdc-667628d80d5f-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.437407 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qdvwx\" (UniqueName: \"kubernetes.io/projected/c985ccf6-8457-45c1-acdc-667628d80d5f-kube-api-access-qdvwx\") pod \"sg-bridge-2-build\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.437532 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c985ccf6-8457-45c1-acdc-667628d80d5f-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.437782 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c985ccf6-8457-45c1-acdc-667628d80d5f-build-system-configs\") pod 
\"sg-bridge-2-build\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.438031 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/c985ccf6-8457-45c1-acdc-667628d80d5f-builder-dockercfg-xwwzx-push\") pod \"sg-bridge-2-build\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.438108 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c985ccf6-8457-45c1-acdc-667628d80d5f-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.438114 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c985ccf6-8457-45c1-acdc-667628d80d5f-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.438350 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/c985ccf6-8457-45c1-acdc-667628d80d5f-builder-dockercfg-xwwzx-pull\") pod \"sg-bridge-2-build\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.438909 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c985ccf6-8457-45c1-acdc-667628d80d5f-container-storage-root\") pod \"sg-bridge-2-build\" (UID: 
\"c985ccf6-8457-45c1-acdc-667628d80d5f\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.439117 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c985ccf6-8457-45c1-acdc-667628d80d5f-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.439120 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c985ccf6-8457-45c1-acdc-667628d80d5f-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.439318 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c985ccf6-8457-45c1-acdc-667628d80d5f-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.440239 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c985ccf6-8457-45c1-acdc-667628d80d5f-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.440386 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c985ccf6-8457-45c1-acdc-667628d80d5f-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 
09:39:23.448831 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/c985ccf6-8457-45c1-acdc-667628d80d5f-builder-dockercfg-xwwzx-push\") pod \"sg-bridge-2-build\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.452607 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/c985ccf6-8457-45c1-acdc-667628d80d5f-builder-dockercfg-xwwzx-pull\") pod \"sg-bridge-2-build\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.466684 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdvwx\" (UniqueName: \"kubernetes.io/projected/c985ccf6-8457-45c1-acdc-667628d80d5f-kube-api-access-qdvwx\") pod \"sg-bridge-2-build\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " pod="service-telemetry/sg-bridge-2-build" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.558178 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 21 09:39:23 crc kubenswrapper[5113]: I0121 09:39:23.842172 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Jan 21 09:39:23 crc kubenswrapper[5113]: W0121 09:39:23.854111 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc985ccf6_8457_45c1_acdc_667628d80d5f.slice/crio-61be85008a755a93624d866b427a42668f33b6afb920fadfa091018b400bec11 WatchSource:0}: Error finding container 61be85008a755a93624d866b427a42668f33b6afb920fadfa091018b400bec11: Status 404 returned error can't find the container with id 61be85008a755a93624d866b427a42668f33b6afb920fadfa091018b400bec11 Jan 21 09:39:24 crc kubenswrapper[5113]: I0121 09:39:24.421233 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"c985ccf6-8457-45c1-acdc-667628d80d5f","Type":"ContainerStarted","Data":"575c67cbab7d77882d719cbb128ebb598563cf29a84a96c7edf74589aa1d9368"} Jan 21 09:39:24 crc kubenswrapper[5113]: I0121 09:39:24.421288 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"c985ccf6-8457-45c1-acdc-667628d80d5f","Type":"ContainerStarted","Data":"61be85008a755a93624d866b427a42668f33b6afb920fadfa091018b400bec11"} Jan 21 09:39:25 crc kubenswrapper[5113]: I0121 09:39:25.427487 5113 generic.go:358] "Generic (PLEG): container finished" podID="c985ccf6-8457-45c1-acdc-667628d80d5f" containerID="575c67cbab7d77882d719cbb128ebb598563cf29a84a96c7edf74589aa1d9368" exitCode=0 Jan 21 09:39:25 crc kubenswrapper[5113]: I0121 09:39:25.427758 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"c985ccf6-8457-45c1-acdc-667628d80d5f","Type":"ContainerDied","Data":"575c67cbab7d77882d719cbb128ebb598563cf29a84a96c7edf74589aa1d9368"} Jan 21 09:39:26 crc kubenswrapper[5113]: 
I0121 09:39:26.438612 5113 generic.go:358] "Generic (PLEG): container finished" podID="c985ccf6-8457-45c1-acdc-667628d80d5f" containerID="30ca987e85c6da8f27936211ca4ec394fab719f4c1f7e451be3e9fc135ff5ff3" exitCode=0 Jan 21 09:39:26 crc kubenswrapper[5113]: I0121 09:39:26.438749 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"c985ccf6-8457-45c1-acdc-667628d80d5f","Type":"ContainerDied","Data":"30ca987e85c6da8f27936211ca4ec394fab719f4c1f7e451be3e9fc135ff5ff3"} Jan 21 09:39:26 crc kubenswrapper[5113]: I0121 09:39:26.487897 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_c985ccf6-8457-45c1-acdc-667628d80d5f/manage-dockerfile/0.log" Jan 21 09:39:27 crc kubenswrapper[5113]: I0121 09:39:27.453792 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"c985ccf6-8457-45c1-acdc-667628d80d5f","Type":"ContainerStarted","Data":"02e37e2bd0f68d8f48a821ab972e0da1948e9096890257e78a6eec8d8fb33867"} Jan 21 09:39:28 crc kubenswrapper[5113]: I0121 09:39:28.340361 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 09:39:28 crc kubenswrapper[5113]: I0121 09:39:28.340773 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 09:39:58 crc kubenswrapper[5113]: I0121 09:39:58.340051 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 09:39:58 crc kubenswrapper[5113]: I0121 09:39:58.340647 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 09:40:00 crc kubenswrapper[5113]: I0121 09:40:00.146608 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-2-build" podStartSLOduration=37.146580247 podStartE2EDuration="37.146580247s" podCreationTimestamp="2026-01-21 09:39:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:39:27.492375539 +0000 UTC m=+1296.993202638" watchObservedRunningTime="2026-01-21 09:40:00.146580247 +0000 UTC m=+1329.647407326" Jan 21 09:40:00 crc kubenswrapper[5113]: I0121 09:40:00.152607 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483140-k6zdk"] Jan 21 09:40:00 crc kubenswrapper[5113]: I0121 09:40:00.159082 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483140-k6zdk" Jan 21 09:40:00 crc kubenswrapper[5113]: I0121 09:40:00.164708 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483140-k6zdk"] Jan 21 09:40:00 crc kubenswrapper[5113]: I0121 09:40:00.164929 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 09:40:00 crc kubenswrapper[5113]: I0121 09:40:00.165035 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 09:40:00 crc kubenswrapper[5113]: I0121 09:40:00.165052 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-mmrr5\"" Jan 21 09:40:00 crc kubenswrapper[5113]: I0121 09:40:00.304722 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4pb8\" (UniqueName: \"kubernetes.io/projected/e4bceb22-9325-4345-aea0-c2a251183b10-kube-api-access-f4pb8\") pod \"auto-csr-approver-29483140-k6zdk\" (UID: \"e4bceb22-9325-4345-aea0-c2a251183b10\") " pod="openshift-infra/auto-csr-approver-29483140-k6zdk" Jan 21 09:40:00 crc kubenswrapper[5113]: I0121 09:40:00.406852 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f4pb8\" (UniqueName: \"kubernetes.io/projected/e4bceb22-9325-4345-aea0-c2a251183b10-kube-api-access-f4pb8\") pod \"auto-csr-approver-29483140-k6zdk\" (UID: \"e4bceb22-9325-4345-aea0-c2a251183b10\") " pod="openshift-infra/auto-csr-approver-29483140-k6zdk" Jan 21 09:40:00 crc kubenswrapper[5113]: I0121 09:40:00.440257 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4pb8\" (UniqueName: \"kubernetes.io/projected/e4bceb22-9325-4345-aea0-c2a251183b10-kube-api-access-f4pb8\") pod \"auto-csr-approver-29483140-k6zdk\" (UID: 
\"e4bceb22-9325-4345-aea0-c2a251183b10\") " pod="openshift-infra/auto-csr-approver-29483140-k6zdk" Jan 21 09:40:00 crc kubenswrapper[5113]: I0121 09:40:00.496513 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483140-k6zdk" Jan 21 09:40:00 crc kubenswrapper[5113]: I0121 09:40:00.704383 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483140-k6zdk"] Jan 21 09:40:01 crc kubenswrapper[5113]: I0121 09:40:01.713092 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483140-k6zdk" event={"ID":"e4bceb22-9325-4345-aea0-c2a251183b10","Type":"ContainerStarted","Data":"983bcb86fdd183779c1a9ed028d28d2e02b2b9cb8cfd6179ce1a034ea9e711a3"} Jan 21 09:40:03 crc kubenswrapper[5113]: I0121 09:40:03.727981 5113 generic.go:358] "Generic (PLEG): container finished" podID="e4bceb22-9325-4345-aea0-c2a251183b10" containerID="88c32a68ed2974af68a5820922528c9963e3c2e5daa1ab4f784c7b71e6c622dd" exitCode=0 Jan 21 09:40:03 crc kubenswrapper[5113]: I0121 09:40:03.728091 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483140-k6zdk" event={"ID":"e4bceb22-9325-4345-aea0-c2a251183b10","Type":"ContainerDied","Data":"88c32a68ed2974af68a5820922528c9963e3c2e5daa1ab4f784c7b71e6c622dd"} Jan 21 09:40:05 crc kubenswrapper[5113]: I0121 09:40:05.029826 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483140-k6zdk" Jan 21 09:40:05 crc kubenswrapper[5113]: I0121 09:40:05.179532 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4pb8\" (UniqueName: \"kubernetes.io/projected/e4bceb22-9325-4345-aea0-c2a251183b10-kube-api-access-f4pb8\") pod \"e4bceb22-9325-4345-aea0-c2a251183b10\" (UID: \"e4bceb22-9325-4345-aea0-c2a251183b10\") " Jan 21 09:40:05 crc kubenswrapper[5113]: I0121 09:40:05.192560 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4bceb22-9325-4345-aea0-c2a251183b10-kube-api-access-f4pb8" (OuterVolumeSpecName: "kube-api-access-f4pb8") pod "e4bceb22-9325-4345-aea0-c2a251183b10" (UID: "e4bceb22-9325-4345-aea0-c2a251183b10"). InnerVolumeSpecName "kube-api-access-f4pb8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:40:05 crc kubenswrapper[5113]: I0121 09:40:05.281555 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f4pb8\" (UniqueName: \"kubernetes.io/projected/e4bceb22-9325-4345-aea0-c2a251183b10-kube-api-access-f4pb8\") on node \"crc\" DevicePath \"\"" Jan 21 09:40:05 crc kubenswrapper[5113]: I0121 09:40:05.746053 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483140-k6zdk" event={"ID":"e4bceb22-9325-4345-aea0-c2a251183b10","Type":"ContainerDied","Data":"983bcb86fdd183779c1a9ed028d28d2e02b2b9cb8cfd6179ce1a034ea9e711a3"} Jan 21 09:40:05 crc kubenswrapper[5113]: I0121 09:40:05.746115 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="983bcb86fdd183779c1a9ed028d28d2e02b2b9cb8cfd6179ce1a034ea9e711a3" Jan 21 09:40:05 crc kubenswrapper[5113]: I0121 09:40:05.746271 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483140-k6zdk" Jan 21 09:40:06 crc kubenswrapper[5113]: I0121 09:40:06.112046 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483134-qxqjm"] Jan 21 09:40:06 crc kubenswrapper[5113]: I0121 09:40:06.117487 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483134-qxqjm"] Jan 21 09:40:06 crc kubenswrapper[5113]: I0121 09:40:06.855146 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="337657a4-5e17-4e6a-9174-6b7e1ecdb9ee" path="/var/lib/kubelet/pods/337657a4-5e17-4e6a-9174-6b7e1ecdb9ee/volumes" Jan 21 09:40:19 crc kubenswrapper[5113]: I0121 09:40:19.530595 5113 generic.go:358] "Generic (PLEG): container finished" podID="c985ccf6-8457-45c1-acdc-667628d80d5f" containerID="02e37e2bd0f68d8f48a821ab972e0da1948e9096890257e78a6eec8d8fb33867" exitCode=0 Jan 21 09:40:19 crc kubenswrapper[5113]: I0121 09:40:19.531900 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"c985ccf6-8457-45c1-acdc-667628d80d5f","Type":"ContainerDied","Data":"02e37e2bd0f68d8f48a821ab972e0da1948e9096890257e78a6eec8d8fb33867"} Jan 21 09:40:20 crc kubenswrapper[5113]: I0121 09:40:20.887118 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 21 09:40:20 crc kubenswrapper[5113]: I0121 09:40:20.942310 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c985ccf6-8457-45c1-acdc-667628d80d5f-buildworkdir\") pod \"c985ccf6-8457-45c1-acdc-667628d80d5f\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " Jan 21 09:40:20 crc kubenswrapper[5113]: I0121 09:40:20.942382 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/c985ccf6-8457-45c1-acdc-667628d80d5f-builder-dockercfg-xwwzx-push\") pod \"c985ccf6-8457-45c1-acdc-667628d80d5f\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " Jan 21 09:40:20 crc kubenswrapper[5113]: I0121 09:40:20.942482 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c985ccf6-8457-45c1-acdc-667628d80d5f-build-proxy-ca-bundles\") pod \"c985ccf6-8457-45c1-acdc-667628d80d5f\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " Jan 21 09:40:20 crc kubenswrapper[5113]: I0121 09:40:20.942561 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c985ccf6-8457-45c1-acdc-667628d80d5f-buildcachedir\") pod \"c985ccf6-8457-45c1-acdc-667628d80d5f\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " Jan 21 09:40:20 crc kubenswrapper[5113]: I0121 09:40:20.942657 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c985ccf6-8457-45c1-acdc-667628d80d5f-node-pullsecrets\") pod \"c985ccf6-8457-45c1-acdc-667628d80d5f\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " Jan 21 09:40:20 crc kubenswrapper[5113]: I0121 09:40:20.942748 5113 operation_generator.go:781] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/host-path/c985ccf6-8457-45c1-acdc-667628d80d5f-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "c985ccf6-8457-45c1-acdc-667628d80d5f" (UID: "c985ccf6-8457-45c1-acdc-667628d80d5f"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:40:20 crc kubenswrapper[5113]: I0121 09:40:20.942802 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c985ccf6-8457-45c1-acdc-667628d80d5f-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "c985ccf6-8457-45c1-acdc-667628d80d5f" (UID: "c985ccf6-8457-45c1-acdc-667628d80d5f"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:40:20 crc kubenswrapper[5113]: I0121 09:40:20.942878 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c985ccf6-8457-45c1-acdc-667628d80d5f-build-blob-cache\") pod \"c985ccf6-8457-45c1-acdc-667628d80d5f\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " Jan 21 09:40:20 crc kubenswrapper[5113]: I0121 09:40:20.943022 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c985ccf6-8457-45c1-acdc-667628d80d5f-build-system-configs\") pod \"c985ccf6-8457-45c1-acdc-667628d80d5f\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " Jan 21 09:40:20 crc kubenswrapper[5113]: I0121 09:40:20.943747 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c985ccf6-8457-45c1-acdc-667628d80d5f-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "c985ccf6-8457-45c1-acdc-667628d80d5f" (UID: "c985ccf6-8457-45c1-acdc-667628d80d5f"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:40:20 crc kubenswrapper[5113]: I0121 09:40:20.943817 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c985ccf6-8457-45c1-acdc-667628d80d5f-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "c985ccf6-8457-45c1-acdc-667628d80d5f" (UID: "c985ccf6-8457-45c1-acdc-667628d80d5f"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:40:20 crc kubenswrapper[5113]: I0121 09:40:20.943974 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c985ccf6-8457-45c1-acdc-667628d80d5f-container-storage-run\") pod \"c985ccf6-8457-45c1-acdc-667628d80d5f\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " Jan 21 09:40:20 crc kubenswrapper[5113]: I0121 09:40:20.944195 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c985ccf6-8457-45c1-acdc-667628d80d5f-container-storage-root\") pod \"c985ccf6-8457-45c1-acdc-667628d80d5f\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " Jan 21 09:40:20 crc kubenswrapper[5113]: I0121 09:40:20.944260 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdvwx\" (UniqueName: \"kubernetes.io/projected/c985ccf6-8457-45c1-acdc-667628d80d5f-kube-api-access-qdvwx\") pod \"c985ccf6-8457-45c1-acdc-667628d80d5f\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " Jan 21 09:40:20 crc kubenswrapper[5113]: I0121 09:40:20.944401 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/c985ccf6-8457-45c1-acdc-667628d80d5f-builder-dockercfg-xwwzx-pull\") pod \"c985ccf6-8457-45c1-acdc-667628d80d5f\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") 
" Jan 21 09:40:20 crc kubenswrapper[5113]: I0121 09:40:20.945107 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c985ccf6-8457-45c1-acdc-667628d80d5f-build-ca-bundles\") pod \"c985ccf6-8457-45c1-acdc-667628d80d5f\" (UID: \"c985ccf6-8457-45c1-acdc-667628d80d5f\") " Jan 21 09:40:20 crc kubenswrapper[5113]: I0121 09:40:20.944989 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c985ccf6-8457-45c1-acdc-667628d80d5f-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "c985ccf6-8457-45c1-acdc-667628d80d5f" (UID: "c985ccf6-8457-45c1-acdc-667628d80d5f"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:40:20 crc kubenswrapper[5113]: I0121 09:40:20.946015 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c985ccf6-8457-45c1-acdc-667628d80d5f-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "c985ccf6-8457-45c1-acdc-667628d80d5f" (UID: "c985ccf6-8457-45c1-acdc-667628d80d5f"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:40:20 crc kubenswrapper[5113]: I0121 09:40:20.946532 5113 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c985ccf6-8457-45c1-acdc-667628d80d5f-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 09:40:20 crc kubenswrapper[5113]: I0121 09:40:20.946568 5113 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c985ccf6-8457-45c1-acdc-667628d80d5f-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 21 09:40:20 crc kubenswrapper[5113]: I0121 09:40:20.946586 5113 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c985ccf6-8457-45c1-acdc-667628d80d5f-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 21 09:40:20 crc kubenswrapper[5113]: I0121 09:40:20.946603 5113 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c985ccf6-8457-45c1-acdc-667628d80d5f-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 21 09:40:20 crc kubenswrapper[5113]: I0121 09:40:20.946619 5113 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c985ccf6-8457-45c1-acdc-667628d80d5f-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 21 09:40:20 crc kubenswrapper[5113]: I0121 09:40:20.946637 5113 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c985ccf6-8457-45c1-acdc-667628d80d5f-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 09:40:20 crc kubenswrapper[5113]: I0121 09:40:20.947468 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c985ccf6-8457-45c1-acdc-667628d80d5f-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod 
"c985ccf6-8457-45c1-acdc-667628d80d5f" (UID: "c985ccf6-8457-45c1-acdc-667628d80d5f"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:40:20 crc kubenswrapper[5113]: I0121 09:40:20.952053 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c985ccf6-8457-45c1-acdc-667628d80d5f-builder-dockercfg-xwwzx-pull" (OuterVolumeSpecName: "builder-dockercfg-xwwzx-pull") pod "c985ccf6-8457-45c1-acdc-667628d80d5f" (UID: "c985ccf6-8457-45c1-acdc-667628d80d5f"). InnerVolumeSpecName "builder-dockercfg-xwwzx-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:40:20 crc kubenswrapper[5113]: I0121 09:40:20.952084 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c985ccf6-8457-45c1-acdc-667628d80d5f-builder-dockercfg-xwwzx-push" (OuterVolumeSpecName: "builder-dockercfg-xwwzx-push") pod "c985ccf6-8457-45c1-acdc-667628d80d5f" (UID: "c985ccf6-8457-45c1-acdc-667628d80d5f"). InnerVolumeSpecName "builder-dockercfg-xwwzx-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:40:20 crc kubenswrapper[5113]: I0121 09:40:20.952207 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c985ccf6-8457-45c1-acdc-667628d80d5f-kube-api-access-qdvwx" (OuterVolumeSpecName: "kube-api-access-qdvwx") pod "c985ccf6-8457-45c1-acdc-667628d80d5f" (UID: "c985ccf6-8457-45c1-acdc-667628d80d5f"). InnerVolumeSpecName "kube-api-access-qdvwx". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:40:21 crc kubenswrapper[5113]: I0121 09:40:21.047998 5113 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c985ccf6-8457-45c1-acdc-667628d80d5f-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 21 09:40:21 crc kubenswrapper[5113]: I0121 09:40:21.048055 5113 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/c985ccf6-8457-45c1-acdc-667628d80d5f-builder-dockercfg-xwwzx-push\") on node \"crc\" DevicePath \"\"" Jan 21 09:40:21 crc kubenswrapper[5113]: I0121 09:40:21.048084 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qdvwx\" (UniqueName: \"kubernetes.io/projected/c985ccf6-8457-45c1-acdc-667628d80d5f-kube-api-access-qdvwx\") on node \"crc\" DevicePath \"\"" Jan 21 09:40:21 crc kubenswrapper[5113]: I0121 09:40:21.048109 5113 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/c985ccf6-8457-45c1-acdc-667628d80d5f-builder-dockercfg-xwwzx-pull\") on node \"crc\" DevicePath \"\"" Jan 21 09:40:21 crc kubenswrapper[5113]: I0121 09:40:21.127612 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c985ccf6-8457-45c1-acdc-667628d80d5f-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "c985ccf6-8457-45c1-acdc-667628d80d5f" (UID: "c985ccf6-8457-45c1-acdc-667628d80d5f"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:40:21 crc kubenswrapper[5113]: I0121 09:40:21.149912 5113 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c985ccf6-8457-45c1-acdc-667628d80d5f-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 21 09:40:21 crc kubenswrapper[5113]: I0121 09:40:21.549975 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"c985ccf6-8457-45c1-acdc-667628d80d5f","Type":"ContainerDied","Data":"61be85008a755a93624d866b427a42668f33b6afb920fadfa091018b400bec11"} Jan 21 09:40:21 crc kubenswrapper[5113]: I0121 09:40:21.550036 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61be85008a755a93624d866b427a42668f33b6afb920fadfa091018b400bec11" Jan 21 09:40:21 crc kubenswrapper[5113]: I0121 09:40:21.550143 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 21 09:40:21 crc kubenswrapper[5113]: I0121 09:40:21.853947 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c985ccf6-8457-45c1-acdc-667628d80d5f-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "c985ccf6-8457-45c1-acdc-667628d80d5f" (UID: "c985ccf6-8457-45c1-acdc-667628d80d5f"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:40:21 crc kubenswrapper[5113]: I0121 09:40:21.858702 5113 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c985ccf6-8457-45c1-acdc-667628d80d5f-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.485381 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.486088 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e4bceb22-9325-4345-aea0-c2a251183b10" containerName="oc" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.486105 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4bceb22-9325-4345-aea0-c2a251183b10" containerName="oc" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.486124 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c985ccf6-8457-45c1-acdc-667628d80d5f" containerName="docker-build" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.486132 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="c985ccf6-8457-45c1-acdc-667628d80d5f" containerName="docker-build" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.486156 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c985ccf6-8457-45c1-acdc-667628d80d5f" containerName="manage-dockerfile" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.486164 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="c985ccf6-8457-45c1-acdc-667628d80d5f" containerName="manage-dockerfile" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.486192 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c985ccf6-8457-45c1-acdc-667628d80d5f" containerName="git-clone" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.486199 5113 
state_mem.go:107] "Deleted CPUSet assignment" podUID="c985ccf6-8457-45c1-acdc-667628d80d5f" containerName="git-clone" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.486305 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="e4bceb22-9325-4345-aea0-c2a251183b10" containerName="oc" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.486320 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="c985ccf6-8457-45c1-acdc-667628d80d5f" containerName="docker-build" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.559040 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.559294 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.561939 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-xwwzx\"" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.563242 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-1-ca\"" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.563442 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-1-global-ca\"" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.563613 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-1-sys-config\"" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.618005 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/15cc5780-4a4a-4b8d-be56-5de00e7950f5-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" 
(UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.618056 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15cc5780-4a4a-4b8d-be56-5de00e7950f5-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.618079 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/15cc5780-4a4a-4b8d-be56-5de00e7950f5-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.618201 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/15cc5780-4a4a-4b8d-be56-5de00e7950f5-builder-dockercfg-xwwzx-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.618250 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/15cc5780-4a4a-4b8d-be56-5de00e7950f5-builder-dockercfg-xwwzx-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.618385 5113 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/15cc5780-4a4a-4b8d-be56-5de00e7950f5-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.618482 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/15cc5780-4a4a-4b8d-be56-5de00e7950f5-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.618533 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/15cc5780-4a4a-4b8d-be56-5de00e7950f5-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.618596 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/15cc5780-4a4a-4b8d-be56-5de00e7950f5-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.618627 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/15cc5780-4a4a-4b8d-be56-5de00e7950f5-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: 
\"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.618744 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15cc5780-4a4a-4b8d-be56-5de00e7950f5-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.618797 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7n98\" (UniqueName: \"kubernetes.io/projected/15cc5780-4a4a-4b8d-be56-5de00e7950f5-kube-api-access-j7n98\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.719968 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/15cc5780-4a4a-4b8d-be56-5de00e7950f5-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.720048 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15cc5780-4a4a-4b8d-be56-5de00e7950f5-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.720098 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/15cc5780-4a4a-4b8d-be56-5de00e7950f5-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.720157 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/15cc5780-4a4a-4b8d-be56-5de00e7950f5-builder-dockercfg-xwwzx-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.720200 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/15cc5780-4a4a-4b8d-be56-5de00e7950f5-builder-dockercfg-xwwzx-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.720253 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/15cc5780-4a4a-4b8d-be56-5de00e7950f5-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.720316 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/15cc5780-4a4a-4b8d-be56-5de00e7950f5-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.720352 5113 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/15cc5780-4a4a-4b8d-be56-5de00e7950f5-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.720411 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/15cc5780-4a4a-4b8d-be56-5de00e7950f5-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.720464 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/15cc5780-4a4a-4b8d-be56-5de00e7950f5-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.720498 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/15cc5780-4a4a-4b8d-be56-5de00e7950f5-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.720543 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15cc5780-4a4a-4b8d-be56-5de00e7950f5-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 
09:40:25.720581 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j7n98\" (UniqueName: \"kubernetes.io/projected/15cc5780-4a4a-4b8d-be56-5de00e7950f5-kube-api-access-j7n98\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.721038 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/15cc5780-4a4a-4b8d-be56-5de00e7950f5-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.721249 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/15cc5780-4a4a-4b8d-be56-5de00e7950f5-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.721430 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/15cc5780-4a4a-4b8d-be56-5de00e7950f5-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.721819 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15cc5780-4a4a-4b8d-be56-5de00e7950f5-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 
09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.721850 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/15cc5780-4a4a-4b8d-be56-5de00e7950f5-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.721989 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/15cc5780-4a4a-4b8d-be56-5de00e7950f5-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.722124 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/15cc5780-4a4a-4b8d-be56-5de00e7950f5-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.724509 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15cc5780-4a4a-4b8d-be56-5de00e7950f5-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.728416 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/15cc5780-4a4a-4b8d-be56-5de00e7950f5-builder-dockercfg-xwwzx-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " 
pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.731981 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/15cc5780-4a4a-4b8d-be56-5de00e7950f5-builder-dockercfg-xwwzx-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.736429 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7n98\" (UniqueName: \"kubernetes.io/projected/15cc5780-4a4a-4b8d-be56-5de00e7950f5-kube-api-access-j7n98\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 09:40:25 crc kubenswrapper[5113]: I0121 09:40:25.878459 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 09:40:26 crc kubenswrapper[5113]: I0121 09:40:26.351326 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 21 09:40:26 crc kubenswrapper[5113]: I0121 09:40:26.595387 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"15cc5780-4a4a-4b8d-be56-5de00e7950f5","Type":"ContainerStarted","Data":"eca5aa741156606c8c6b16ab423e091805302dc565a7323c86fcc1074d8bb75b"} Jan 21 09:40:27 crc kubenswrapper[5113]: I0121 09:40:27.607055 5113 generic.go:358] "Generic (PLEG): container finished" podID="15cc5780-4a4a-4b8d-be56-5de00e7950f5" containerID="ae222db2966cffb371c92685166e05c23e7397b7006ac1fa0edb36db41cc9142" exitCode=0 Jan 21 09:40:27 crc kubenswrapper[5113]: I0121 09:40:27.607126 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"15cc5780-4a4a-4b8d-be56-5de00e7950f5","Type":"ContainerDied","Data":"ae222db2966cffb371c92685166e05c23e7397b7006ac1fa0edb36db41cc9142"} Jan 21 09:40:28 crc kubenswrapper[5113]: I0121 09:40:28.340048 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 09:40:28 crc kubenswrapper[5113]: I0121 09:40:28.340406 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 09:40:28 crc kubenswrapper[5113]: I0121 09:40:28.340462 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" Jan 21 09:40:28 crc kubenswrapper[5113]: I0121 09:40:28.341009 5113 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"21cff32383aa2d9d302ef8effdf45aa80c8179b1a391761f749d397c6c018756"} pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 09:40:28 crc kubenswrapper[5113]: I0121 09:40:28.341085 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" containerID="cri-o://21cff32383aa2d9d302ef8effdf45aa80c8179b1a391761f749d397c6c018756" gracePeriod=600 Jan 21 
09:40:28 crc kubenswrapper[5113]: I0121 09:40:28.618025 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"15cc5780-4a4a-4b8d-be56-5de00e7950f5","Type":"ContainerStarted","Data":"3e1fb3a4ec793ede91d2f5cf5c322dfb872db28c68da81b0e419c20517b1e215"} Jan 21 09:40:28 crc kubenswrapper[5113]: I0121 09:40:28.622041 5113 generic.go:358] "Generic (PLEG): container finished" podID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerID="21cff32383aa2d9d302ef8effdf45aa80c8179b1a391761f749d397c6c018756" exitCode=0 Jan 21 09:40:28 crc kubenswrapper[5113]: I0121 09:40:28.622152 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" event={"ID":"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd","Type":"ContainerDied","Data":"21cff32383aa2d9d302ef8effdf45aa80c8179b1a391761f749d397c6c018756"} Jan 21 09:40:28 crc kubenswrapper[5113]: I0121 09:40:28.622181 5113 scope.go:117] "RemoveContainer" containerID="4cea019751a422b4c0c4aa18b701d7b4d78cd1315667ce00f2f2720f1584251c" Jan 21 09:40:28 crc kubenswrapper[5113]: I0121 09:40:28.660988 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-1-build" podStartSLOduration=3.6609661940000002 podStartE2EDuration="3.660966194s" podCreationTimestamp="2026-01-21 09:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:40:28.651260278 +0000 UTC m=+1358.152087347" watchObservedRunningTime="2026-01-21 09:40:28.660966194 +0000 UTC m=+1358.161793253" Jan 21 09:40:29 crc kubenswrapper[5113]: I0121 09:40:29.630340 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" 
event={"ID":"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd","Type":"ContainerStarted","Data":"323c269f606c89f54b9cd8eea875f977746d25c6a01c2f3a0d008ec5a4e7b9bf"} Jan 21 09:40:30 crc kubenswrapper[5113]: I0121 09:40:30.687650 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nk8kv"] Jan 21 09:40:30 crc kubenswrapper[5113]: I0121 09:40:30.694365 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nk8kv" Jan 21 09:40:30 crc kubenswrapper[5113]: I0121 09:40:30.702659 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nk8kv"] Jan 21 09:40:30 crc kubenswrapper[5113]: I0121 09:40:30.811120 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31c5082a-1e8d-4026-9918-4daffe96da53-catalog-content\") pod \"community-operators-nk8kv\" (UID: \"31c5082a-1e8d-4026-9918-4daffe96da53\") " pod="openshift-marketplace/community-operators-nk8kv" Jan 21 09:40:30 crc kubenswrapper[5113]: I0121 09:40:30.811418 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31c5082a-1e8d-4026-9918-4daffe96da53-utilities\") pod \"community-operators-nk8kv\" (UID: \"31c5082a-1e8d-4026-9918-4daffe96da53\") " pod="openshift-marketplace/community-operators-nk8kv" Jan 21 09:40:30 crc kubenswrapper[5113]: I0121 09:40:30.811637 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc8lw\" (UniqueName: \"kubernetes.io/projected/31c5082a-1e8d-4026-9918-4daffe96da53-kube-api-access-gc8lw\") pod \"community-operators-nk8kv\" (UID: \"31c5082a-1e8d-4026-9918-4daffe96da53\") " pod="openshift-marketplace/community-operators-nk8kv" Jan 21 09:40:30 crc kubenswrapper[5113]: I0121 09:40:30.912569 
5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31c5082a-1e8d-4026-9918-4daffe96da53-utilities\") pod \"community-operators-nk8kv\" (UID: \"31c5082a-1e8d-4026-9918-4daffe96da53\") " pod="openshift-marketplace/community-operators-nk8kv" Jan 21 09:40:30 crc kubenswrapper[5113]: I0121 09:40:30.912808 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gc8lw\" (UniqueName: \"kubernetes.io/projected/31c5082a-1e8d-4026-9918-4daffe96da53-kube-api-access-gc8lw\") pod \"community-operators-nk8kv\" (UID: \"31c5082a-1e8d-4026-9918-4daffe96da53\") " pod="openshift-marketplace/community-operators-nk8kv" Jan 21 09:40:30 crc kubenswrapper[5113]: I0121 09:40:30.912870 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31c5082a-1e8d-4026-9918-4daffe96da53-catalog-content\") pod \"community-operators-nk8kv\" (UID: \"31c5082a-1e8d-4026-9918-4daffe96da53\") " pod="openshift-marketplace/community-operators-nk8kv" Jan 21 09:40:30 crc kubenswrapper[5113]: I0121 09:40:30.913570 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31c5082a-1e8d-4026-9918-4daffe96da53-catalog-content\") pod \"community-operators-nk8kv\" (UID: \"31c5082a-1e8d-4026-9918-4daffe96da53\") " pod="openshift-marketplace/community-operators-nk8kv" Jan 21 09:40:30 crc kubenswrapper[5113]: I0121 09:40:30.914018 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31c5082a-1e8d-4026-9918-4daffe96da53-utilities\") pod \"community-operators-nk8kv\" (UID: \"31c5082a-1e8d-4026-9918-4daffe96da53\") " pod="openshift-marketplace/community-operators-nk8kv" Jan 21 09:40:30 crc kubenswrapper[5113]: I0121 09:40:30.947690 5113 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gc8lw\" (UniqueName: \"kubernetes.io/projected/31c5082a-1e8d-4026-9918-4daffe96da53-kube-api-access-gc8lw\") pod \"community-operators-nk8kv\" (UID: \"31c5082a-1e8d-4026-9918-4daffe96da53\") " pod="openshift-marketplace/community-operators-nk8kv" Jan 21 09:40:31 crc kubenswrapper[5113]: I0121 09:40:31.028375 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nk8kv" Jan 21 09:40:31 crc kubenswrapper[5113]: I0121 09:40:31.318044 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nk8kv"] Jan 21 09:40:31 crc kubenswrapper[5113]: W0121 09:40:31.321826 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31c5082a_1e8d_4026_9918_4daffe96da53.slice/crio-68b6e56175f05d962f61588ec5656a6734a01739acf9c7928019f5464253ad19 WatchSource:0}: Error finding container 68b6e56175f05d962f61588ec5656a6734a01739acf9c7928019f5464253ad19: Status 404 returned error can't find the container with id 68b6e56175f05d962f61588ec5656a6734a01739acf9c7928019f5464253ad19 Jan 21 09:40:31 crc kubenswrapper[5113]: I0121 09:40:31.645054 5113 generic.go:358] "Generic (PLEG): container finished" podID="31c5082a-1e8d-4026-9918-4daffe96da53" containerID="54f3df6d87c0ae48a83e0b68271fd2cd36374f5e63a5618c29d2b2cf1b27a5b5" exitCode=0 Jan 21 09:40:31 crc kubenswrapper[5113]: I0121 09:40:31.645441 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nk8kv" event={"ID":"31c5082a-1e8d-4026-9918-4daffe96da53","Type":"ContainerDied","Data":"54f3df6d87c0ae48a83e0b68271fd2cd36374f5e63a5618c29d2b2cf1b27a5b5"} Jan 21 09:40:31 crc kubenswrapper[5113]: I0121 09:40:31.645471 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nk8kv" 
event={"ID":"31c5082a-1e8d-4026-9918-4daffe96da53","Type":"ContainerStarted","Data":"68b6e56175f05d962f61588ec5656a6734a01739acf9c7928019f5464253ad19"} Jan 21 09:40:32 crc kubenswrapper[5113]: I0121 09:40:32.655110 5113 generic.go:358] "Generic (PLEG): container finished" podID="31c5082a-1e8d-4026-9918-4daffe96da53" containerID="e45e93f560517d00a709a21f155fb52aec6d45dfdcb4ad0764176f3e1d9144d4" exitCode=0 Jan 21 09:40:32 crc kubenswrapper[5113]: I0121 09:40:32.655179 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nk8kv" event={"ID":"31c5082a-1e8d-4026-9918-4daffe96da53","Type":"ContainerDied","Data":"e45e93f560517d00a709a21f155fb52aec6d45dfdcb4ad0764176f3e1d9144d4"} Jan 21 09:40:34 crc kubenswrapper[5113]: I0121 09:40:34.670683 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nk8kv" event={"ID":"31c5082a-1e8d-4026-9918-4daffe96da53","Type":"ContainerStarted","Data":"2297cfc4e73b27d77c056c6c5c790bc026ea2a36f166a5f5e3ae0820bc19656a"} Jan 21 09:40:34 crc kubenswrapper[5113]: I0121 09:40:34.695501 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nk8kv" podStartSLOduration=4.142907937 podStartE2EDuration="4.695479747s" podCreationTimestamp="2026-01-21 09:40:30 +0000 UTC" firstStartedPulling="2026-01-21 09:40:31.646431101 +0000 UTC m=+1361.147258170" lastFinishedPulling="2026-01-21 09:40:32.199002921 +0000 UTC m=+1361.699829980" observedRunningTime="2026-01-21 09:40:34.69029015 +0000 UTC m=+1364.191117209" watchObservedRunningTime="2026-01-21 09:40:34.695479747 +0000 UTC m=+1364.196306806" Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.144530 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.145204 5113 kuberuntime_container.go:858] "Killing container with a grace period" 
pod="service-telemetry/prometheus-webhook-snmp-1-build" podUID="15cc5780-4a4a-4b8d-be56-5de00e7950f5" containerName="docker-build" containerID="cri-o://3e1fb3a4ec793ede91d2f5cf5c322dfb872db28c68da81b0e419c20517b1e215" gracePeriod=30 Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.673883 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_15cc5780-4a4a-4b8d-be56-5de00e7950f5/docker-build/0.log" Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.674684 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.685510 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_15cc5780-4a4a-4b8d-be56-5de00e7950f5/docker-build/0.log" Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.686360 5113 generic.go:358] "Generic (PLEG): container finished" podID="15cc5780-4a4a-4b8d-be56-5de00e7950f5" containerID="3e1fb3a4ec793ede91d2f5cf5c322dfb872db28c68da81b0e419c20517b1e215" exitCode=1 Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.686501 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.686457 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"15cc5780-4a4a-4b8d-be56-5de00e7950f5","Type":"ContainerDied","Data":"3e1fb3a4ec793ede91d2f5cf5c322dfb872db28c68da81b0e419c20517b1e215"} Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.686685 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"15cc5780-4a4a-4b8d-be56-5de00e7950f5","Type":"ContainerDied","Data":"eca5aa741156606c8c6b16ab423e091805302dc565a7323c86fcc1074d8bb75b"} Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.686712 5113 scope.go:117] "RemoveContainer" containerID="3e1fb3a4ec793ede91d2f5cf5c322dfb872db28c68da81b0e419c20517b1e215" Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.724355 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/15cc5780-4a4a-4b8d-be56-5de00e7950f5-node-pullsecrets\") pod \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.724500 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7n98\" (UniqueName: \"kubernetes.io/projected/15cc5780-4a4a-4b8d-be56-5de00e7950f5-kube-api-access-j7n98\") pod \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.724496 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15cc5780-4a4a-4b8d-be56-5de00e7950f5-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "15cc5780-4a4a-4b8d-be56-5de00e7950f5" (UID: "15cc5780-4a4a-4b8d-be56-5de00e7950f5"). 
InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.724552 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/15cc5780-4a4a-4b8d-be56-5de00e7950f5-container-storage-root\") pod \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.724602 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/15cc5780-4a4a-4b8d-be56-5de00e7950f5-buildcachedir\") pod \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.724647 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/15cc5780-4a4a-4b8d-be56-5de00e7950f5-builder-dockercfg-xwwzx-push\") pod \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.724703 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/15cc5780-4a4a-4b8d-be56-5de00e7950f5-buildworkdir\") pod \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.724821 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15cc5780-4a4a-4b8d-be56-5de00e7950f5-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "15cc5780-4a4a-4b8d-be56-5de00e7950f5" (UID: "15cc5780-4a4a-4b8d-be56-5de00e7950f5"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.724869 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/15cc5780-4a4a-4b8d-be56-5de00e7950f5-build-blob-cache\") pod \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.724937 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/15cc5780-4a4a-4b8d-be56-5de00e7950f5-build-system-configs\") pod \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.725014 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/15cc5780-4a4a-4b8d-be56-5de00e7950f5-builder-dockercfg-xwwzx-pull\") pod \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.725059 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/15cc5780-4a4a-4b8d-be56-5de00e7950f5-container-storage-run\") pod \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.725109 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15cc5780-4a4a-4b8d-be56-5de00e7950f5-build-ca-bundles\") pod \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.725143 5113 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15cc5780-4a4a-4b8d-be56-5de00e7950f5-build-proxy-ca-bundles\") pod \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\" (UID: \"15cc5780-4a4a-4b8d-be56-5de00e7950f5\") " Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.725393 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15cc5780-4a4a-4b8d-be56-5de00e7950f5-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "15cc5780-4a4a-4b8d-be56-5de00e7950f5" (UID: "15cc5780-4a4a-4b8d-be56-5de00e7950f5"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.725671 5113 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/15cc5780-4a4a-4b8d-be56-5de00e7950f5-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.725691 5113 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/15cc5780-4a4a-4b8d-be56-5de00e7950f5-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.725704 5113 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/15cc5780-4a4a-4b8d-be56-5de00e7950f5-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.726416 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15cc5780-4a4a-4b8d-be56-5de00e7950f5-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "15cc5780-4a4a-4b8d-be56-5de00e7950f5" (UID: "15cc5780-4a4a-4b8d-be56-5de00e7950f5"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.726682 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15cc5780-4a4a-4b8d-be56-5de00e7950f5-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "15cc5780-4a4a-4b8d-be56-5de00e7950f5" (UID: "15cc5780-4a4a-4b8d-be56-5de00e7950f5"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.727345 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15cc5780-4a4a-4b8d-be56-5de00e7950f5-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "15cc5780-4a4a-4b8d-be56-5de00e7950f5" (UID: "15cc5780-4a4a-4b8d-be56-5de00e7950f5"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.728036 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15cc5780-4a4a-4b8d-be56-5de00e7950f5-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "15cc5780-4a4a-4b8d-be56-5de00e7950f5" (UID: "15cc5780-4a4a-4b8d-be56-5de00e7950f5"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.735126 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15cc5780-4a4a-4b8d-be56-5de00e7950f5-builder-dockercfg-xwwzx-pull" (OuterVolumeSpecName: "builder-dockercfg-xwwzx-pull") pod "15cc5780-4a4a-4b8d-be56-5de00e7950f5" (UID: "15cc5780-4a4a-4b8d-be56-5de00e7950f5"). InnerVolumeSpecName "builder-dockercfg-xwwzx-pull". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.739998 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15cc5780-4a4a-4b8d-be56-5de00e7950f5-builder-dockercfg-xwwzx-push" (OuterVolumeSpecName: "builder-dockercfg-xwwzx-push") pod "15cc5780-4a4a-4b8d-be56-5de00e7950f5" (UID: "15cc5780-4a4a-4b8d-be56-5de00e7950f5"). InnerVolumeSpecName "builder-dockercfg-xwwzx-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.740030 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15cc5780-4a4a-4b8d-be56-5de00e7950f5-kube-api-access-j7n98" (OuterVolumeSpecName: "kube-api-access-j7n98") pod "15cc5780-4a4a-4b8d-be56-5de00e7950f5" (UID: "15cc5780-4a4a-4b8d-be56-5de00e7950f5"). InnerVolumeSpecName "kube-api-access-j7n98". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.761084 5113 scope.go:117] "RemoveContainer" containerID="ae222db2966cffb371c92685166e05c23e7397b7006ac1fa0edb36db41cc9142" Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.797120 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15cc5780-4a4a-4b8d-be56-5de00e7950f5-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "15cc5780-4a4a-4b8d-be56-5de00e7950f5" (UID: "15cc5780-4a4a-4b8d-be56-5de00e7950f5"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.826605 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j7n98\" (UniqueName: \"kubernetes.io/projected/15cc5780-4a4a-4b8d-be56-5de00e7950f5-kube-api-access-j7n98\") on node \"crc\" DevicePath \"\""
Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.826648 5113 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/15cc5780-4a4a-4b8d-be56-5de00e7950f5-builder-dockercfg-xwwzx-push\") on node \"crc\" DevicePath \"\""
Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.826658 5113 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/15cc5780-4a4a-4b8d-be56-5de00e7950f5-build-blob-cache\") on node \"crc\" DevicePath \"\""
Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.826666 5113 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/15cc5780-4a4a-4b8d-be56-5de00e7950f5-build-system-configs\") on node \"crc\" DevicePath \"\""
Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.826675 5113 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/15cc5780-4a4a-4b8d-be56-5de00e7950f5-builder-dockercfg-xwwzx-pull\") on node \"crc\" DevicePath \"\""
Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.826686 5113 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/15cc5780-4a4a-4b8d-be56-5de00e7950f5-container-storage-run\") on node \"crc\" DevicePath \"\""
Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.826694 5113 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15cc5780-4a4a-4b8d-be56-5de00e7950f5-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.826702 5113 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/15cc5780-4a4a-4b8d-be56-5de00e7950f5-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.838781 5113 scope.go:117] "RemoveContainer" containerID="3e1fb3a4ec793ede91d2f5cf5c322dfb872db28c68da81b0e419c20517b1e215"
Jan 21 09:40:36 crc kubenswrapper[5113]: E0121 09:40:36.839180 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e1fb3a4ec793ede91d2f5cf5c322dfb872db28c68da81b0e419c20517b1e215\": container with ID starting with 3e1fb3a4ec793ede91d2f5cf5c322dfb872db28c68da81b0e419c20517b1e215 not found: ID does not exist" containerID="3e1fb3a4ec793ede91d2f5cf5c322dfb872db28c68da81b0e419c20517b1e215"
Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.839216 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e1fb3a4ec793ede91d2f5cf5c322dfb872db28c68da81b0e419c20517b1e215"} err="failed to get container status \"3e1fb3a4ec793ede91d2f5cf5c322dfb872db28c68da81b0e419c20517b1e215\": rpc error: code = NotFound desc = could not find container \"3e1fb3a4ec793ede91d2f5cf5c322dfb872db28c68da81b0e419c20517b1e215\": container with ID starting with 3e1fb3a4ec793ede91d2f5cf5c322dfb872db28c68da81b0e419c20517b1e215 not found: ID does not exist"
Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.839241 5113 scope.go:117] "RemoveContainer" containerID="ae222db2966cffb371c92685166e05c23e7397b7006ac1fa0edb36db41cc9142"
Jan 21 09:40:36 crc kubenswrapper[5113]: E0121 09:40:36.839852 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae222db2966cffb371c92685166e05c23e7397b7006ac1fa0edb36db41cc9142\": container with ID starting with ae222db2966cffb371c92685166e05c23e7397b7006ac1fa0edb36db41cc9142 not found: ID does not exist" containerID="ae222db2966cffb371c92685166e05c23e7397b7006ac1fa0edb36db41cc9142"
Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.839896 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae222db2966cffb371c92685166e05c23e7397b7006ac1fa0edb36db41cc9142"} err="failed to get container status \"ae222db2966cffb371c92685166e05c23e7397b7006ac1fa0edb36db41cc9142\": rpc error: code = NotFound desc = could not find container \"ae222db2966cffb371c92685166e05c23e7397b7006ac1fa0edb36db41cc9142\": container with ID starting with ae222db2966cffb371c92685166e05c23e7397b7006ac1fa0edb36db41cc9142 not found: ID does not exist"
Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.873403 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15cc5780-4a4a-4b8d-be56-5de00e7950f5-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "15cc5780-4a4a-4b8d-be56-5de00e7950f5" (UID: "15cc5780-4a4a-4b8d-be56-5de00e7950f5"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:40:36 crc kubenswrapper[5113]: I0121 09:40:36.928030 5113 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/15cc5780-4a4a-4b8d-be56-5de00e7950f5-container-storage-root\") on node \"crc\" DevicePath \"\""
Jan 21 09:40:37 crc kubenswrapper[5113]: I0121 09:40:37.033703 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"]
Jan 21 09:40:37 crc kubenswrapper[5113]: I0121 09:40:37.039598 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"]
Jan 21 09:40:37 crc kubenswrapper[5113]: I0121 09:40:37.825266 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"]
Jan 21 09:40:37 crc kubenswrapper[5113]: I0121 09:40:37.827968 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="15cc5780-4a4a-4b8d-be56-5de00e7950f5" containerName="docker-build"
Jan 21 09:40:37 crc kubenswrapper[5113]: I0121 09:40:37.828111 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="15cc5780-4a4a-4b8d-be56-5de00e7950f5" containerName="docker-build"
Jan 21 09:40:37 crc kubenswrapper[5113]: I0121 09:40:37.828240 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="15cc5780-4a4a-4b8d-be56-5de00e7950f5" containerName="manage-dockerfile"
Jan 21 09:40:37 crc kubenswrapper[5113]: I0121 09:40:37.828334 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="15cc5780-4a4a-4b8d-be56-5de00e7950f5" containerName="manage-dockerfile"
Jan 21 09:40:37 crc kubenswrapper[5113]: I0121 09:40:37.828604 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="15cc5780-4a4a-4b8d-be56-5de00e7950f5" containerName="docker-build"
Jan 21 09:40:37 crc kubenswrapper[5113]: I0121 09:40:37.834135 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 21 09:40:37 crc kubenswrapper[5113]: I0121 09:40:37.836960 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-2-sys-config\""
Jan 21 09:40:37 crc kubenswrapper[5113]: I0121 09:40:37.837273 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-2-ca\""
Jan 21 09:40:37 crc kubenswrapper[5113]: I0121 09:40:37.838340 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-2-global-ca\""
Jan 21 09:40:37 crc kubenswrapper[5113]: I0121 09:40:37.841938 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-xwwzx\""
Jan 21 09:40:37 crc kubenswrapper[5113]: I0121 09:40:37.842237 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"]
Jan 21 09:40:37 crc kubenswrapper[5113]: I0121 09:40:37.942928 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5f329e83-b6df-4338-bd89-08e3346dadf3-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 21 09:40:37 crc kubenswrapper[5113]: I0121 09:40:37.942974 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5f329e83-b6df-4338-bd89-08e3346dadf3-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 21 09:40:37 crc kubenswrapper[5113]: I0121 09:40:37.942993 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5f329e83-b6df-4338-bd89-08e3346dadf3-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 21 09:40:37 crc kubenswrapper[5113]: I0121 09:40:37.943033 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5f329e83-b6df-4338-bd89-08e3346dadf3-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 21 09:40:37 crc kubenswrapper[5113]: I0121 09:40:37.943051 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/5f329e83-b6df-4338-bd89-08e3346dadf3-builder-dockercfg-xwwzx-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 21 09:40:37 crc kubenswrapper[5113]: I0121 09:40:37.943070 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5f329e83-b6df-4338-bd89-08e3346dadf3-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 21 09:40:37 crc kubenswrapper[5113]: I0121 09:40:37.943088 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5f329e83-b6df-4338-bd89-08e3346dadf3-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 21 09:40:37 crc kubenswrapper[5113]: I0121 09:40:37.943104 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/5f329e83-b6df-4338-bd89-08e3346dadf3-builder-dockercfg-xwwzx-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 21 09:40:37 crc kubenswrapper[5113]: I0121 09:40:37.943123 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnnp2\" (UniqueName: \"kubernetes.io/projected/5f329e83-b6df-4338-bd89-08e3346dadf3-kube-api-access-xnnp2\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 21 09:40:37 crc kubenswrapper[5113]: I0121 09:40:37.943156 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5f329e83-b6df-4338-bd89-08e3346dadf3-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 21 09:40:37 crc kubenswrapper[5113]: I0121 09:40:37.943176 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5f329e83-b6df-4338-bd89-08e3346dadf3-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 21 09:40:37 crc kubenswrapper[5113]: I0121 09:40:37.943222 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5f329e83-b6df-4338-bd89-08e3346dadf3-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 21 09:40:38 crc kubenswrapper[5113]: I0121 09:40:38.044633 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5f329e83-b6df-4338-bd89-08e3346dadf3-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 21 09:40:38 crc kubenswrapper[5113]: I0121 09:40:38.044699 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5f329e83-b6df-4338-bd89-08e3346dadf3-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 21 09:40:38 crc kubenswrapper[5113]: I0121 09:40:38.045012 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5f329e83-b6df-4338-bd89-08e3346dadf3-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 21 09:40:38 crc kubenswrapper[5113]: I0121 09:40:38.045132 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5f329e83-b6df-4338-bd89-08e3346dadf3-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 21 09:40:38 crc kubenswrapper[5113]: I0121 09:40:38.045184 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5f329e83-b6df-4338-bd89-08e3346dadf3-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 21 09:40:38 crc kubenswrapper[5113]: I0121 09:40:38.045300 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5f329e83-b6df-4338-bd89-08e3346dadf3-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 21 09:40:38 crc kubenswrapper[5113]: I0121 09:40:38.045441 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/5f329e83-b6df-4338-bd89-08e3346dadf3-builder-dockercfg-xwwzx-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 21 09:40:38 crc kubenswrapper[5113]: I0121 09:40:38.045493 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5f329e83-b6df-4338-bd89-08e3346dadf3-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 21 09:40:38 crc kubenswrapper[5113]: I0121 09:40:38.045514 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5f329e83-b6df-4338-bd89-08e3346dadf3-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 21 09:40:38 crc kubenswrapper[5113]: I0121 09:40:38.045553 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5f329e83-b6df-4338-bd89-08e3346dadf3-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 21 09:40:38 crc kubenswrapper[5113]: I0121 09:40:38.045608 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/5f329e83-b6df-4338-bd89-08e3346dadf3-builder-dockercfg-xwwzx-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 21 09:40:38 crc kubenswrapper[5113]: I0121 09:40:38.045674 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xnnp2\" (UniqueName: \"kubernetes.io/projected/5f329e83-b6df-4338-bd89-08e3346dadf3-kube-api-access-xnnp2\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 21 09:40:38 crc kubenswrapper[5113]: I0121 09:40:38.045803 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5f329e83-b6df-4338-bd89-08e3346dadf3-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 21 09:40:38 crc kubenswrapper[5113]: I0121 09:40:38.045860 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5f329e83-b6df-4338-bd89-08e3346dadf3-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 21 09:40:38 crc kubenswrapper[5113]: I0121 09:40:38.045995 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5f329e83-b6df-4338-bd89-08e3346dadf3-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 21 09:40:38 crc kubenswrapper[5113]: I0121 09:40:38.046150 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5f329e83-b6df-4338-bd89-08e3346dadf3-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 21 09:40:38 crc kubenswrapper[5113]: I0121 09:40:38.046421 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5f329e83-b6df-4338-bd89-08e3346dadf3-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 21 09:40:38 crc kubenswrapper[5113]: I0121 09:40:38.046472 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5f329e83-b6df-4338-bd89-08e3346dadf3-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 21 09:40:38 crc kubenswrapper[5113]: I0121 09:40:38.046577 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5f329e83-b6df-4338-bd89-08e3346dadf3-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 21 09:40:38 crc kubenswrapper[5113]: I0121 09:40:38.046780 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5f329e83-b6df-4338-bd89-08e3346dadf3-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 21 09:40:38 crc kubenswrapper[5113]: I0121 09:40:38.048129 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5f329e83-b6df-4338-bd89-08e3346dadf3-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 21 09:40:38 crc kubenswrapper[5113]: I0121 09:40:38.052902 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/5f329e83-b6df-4338-bd89-08e3346dadf3-builder-dockercfg-xwwzx-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 21 09:40:38 crc kubenswrapper[5113]: I0121 09:40:38.059525 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/5f329e83-b6df-4338-bd89-08e3346dadf3-builder-dockercfg-xwwzx-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 21 09:40:38 crc kubenswrapper[5113]: I0121 09:40:38.075573 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnnp2\" (UniqueName: \"kubernetes.io/projected/5f329e83-b6df-4338-bd89-08e3346dadf3-kube-api-access-xnnp2\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 21 09:40:38 crc kubenswrapper[5113]: I0121 09:40:38.180662 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 21 09:40:38 crc kubenswrapper[5113]: I0121 09:40:38.698477 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"]
Jan 21 09:40:38 crc kubenswrapper[5113]: I0121 09:40:38.717679 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"5f329e83-b6df-4338-bd89-08e3346dadf3","Type":"ContainerStarted","Data":"00c400e469c71f696f1d9e185df80b5c9eb9978c22988d271fa0fe668b01cbe5"}
Jan 21 09:40:38 crc kubenswrapper[5113]: I0121 09:40:38.856297 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15cc5780-4a4a-4b8d-be56-5de00e7950f5" path="/var/lib/kubelet/pods/15cc5780-4a4a-4b8d-be56-5de00e7950f5/volumes"
Jan 21 09:40:39 crc kubenswrapper[5113]: I0121 09:40:39.728315 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"5f329e83-b6df-4338-bd89-08e3346dadf3","Type":"ContainerStarted","Data":"a25660fa6b0b7b9550a5b7457ebebe332276c762c929a403da60610d29cee0e6"}
Jan 21 09:40:39 crc kubenswrapper[5113]: E0121 09:40:39.900262 5113 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.181:44286->38.102.83.181:34439: write tcp 38.102.83.181:44286->38.102.83.181:34439: write: connection reset by peer
Jan 21 09:40:40 crc kubenswrapper[5113]: E0121 09:40:40.035395 5113 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f329e83_b6df_4338_bd89_08e3346dadf3.slice/crio-conmon-a25660fa6b0b7b9550a5b7457ebebe332276c762c929a403da60610d29cee0e6.scope\": RecentStats: unable to find data in memory cache]"
Jan 21 09:40:40 crc kubenswrapper[5113]: I0121 09:40:40.736657 5113 generic.go:358] "Generic (PLEG): container finished" podID="5f329e83-b6df-4338-bd89-08e3346dadf3" containerID="a25660fa6b0b7b9550a5b7457ebebe332276c762c929a403da60610d29cee0e6" exitCode=0
Jan 21 09:40:40 crc kubenswrapper[5113]: I0121 09:40:40.736788 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"5f329e83-b6df-4338-bd89-08e3346dadf3","Type":"ContainerDied","Data":"a25660fa6b0b7b9550a5b7457ebebe332276c762c929a403da60610d29cee0e6"}
Jan 21 09:40:41 crc kubenswrapper[5113]: I0121 09:40:41.034023 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nk8kv"
Jan 21 09:40:41 crc kubenswrapper[5113]: I0121 09:40:41.034311 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-nk8kv"
Jan 21 09:40:41 crc kubenswrapper[5113]: I0121 09:40:41.079415 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nk8kv"
Jan 21 09:40:41 crc kubenswrapper[5113]: I0121 09:40:41.747794 5113 generic.go:358] "Generic (PLEG): container finished" podID="5f329e83-b6df-4338-bd89-08e3346dadf3" containerID="8cf7c210ac0b13182096806fcc4fb6414cb1c457bf89e6cb2ccce5bced616d17" exitCode=0
Jan 21 09:40:41 crc kubenswrapper[5113]: I0121 09:40:41.747864 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"5f329e83-b6df-4338-bd89-08e3346dadf3","Type":"ContainerDied","Data":"8cf7c210ac0b13182096806fcc4fb6414cb1c457bf89e6cb2ccce5bced616d17"}
Jan 21 09:40:41 crc kubenswrapper[5113]: I0121 09:40:41.801650 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_5f329e83-b6df-4338-bd89-08e3346dadf3/manage-dockerfile/0.log"
Jan 21 09:40:41 crc kubenswrapper[5113]: I0121 09:40:41.816612 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nk8kv"
Jan 21 09:40:41 crc kubenswrapper[5113]: I0121 09:40:41.877368 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nk8kv"]
Jan 21 09:40:42 crc kubenswrapper[5113]: I0121 09:40:42.762138 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"5f329e83-b6df-4338-bd89-08e3346dadf3","Type":"ContainerStarted","Data":"1758897591d36b0421ac9b26aeabb1d4880af1ed91b388e45b24e4942e9d604f"}
Jan 21 09:40:42 crc kubenswrapper[5113]: I0121 09:40:42.805090 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-2-build" podStartSLOduration=5.805068007 podStartE2EDuration="5.805068007s" podCreationTimestamp="2026-01-21 09:40:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:40:42.798702567 +0000 UTC m=+1372.299529626" watchObservedRunningTime="2026-01-21 09:40:42.805068007 +0000 UTC m=+1372.305895066"
Jan 21 09:40:43 crc kubenswrapper[5113]: I0121 09:40:43.770151 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nk8kv" podUID="31c5082a-1e8d-4026-9918-4daffe96da53" containerName="registry-server" containerID="cri-o://2297cfc4e73b27d77c056c6c5c790bc026ea2a36f166a5f5e3ae0820bc19656a" gracePeriod=2
Jan 21 09:40:44 crc kubenswrapper[5113]: I0121 09:40:44.165842 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nk8kv"
Jan 21 09:40:44 crc kubenswrapper[5113]: I0121 09:40:44.243660 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31c5082a-1e8d-4026-9918-4daffe96da53-utilities\") pod \"31c5082a-1e8d-4026-9918-4daffe96da53\" (UID: \"31c5082a-1e8d-4026-9918-4daffe96da53\") "
Jan 21 09:40:44 crc kubenswrapper[5113]: I0121 09:40:44.243755 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31c5082a-1e8d-4026-9918-4daffe96da53-catalog-content\") pod \"31c5082a-1e8d-4026-9918-4daffe96da53\" (UID: \"31c5082a-1e8d-4026-9918-4daffe96da53\") "
Jan 21 09:40:44 crc kubenswrapper[5113]: I0121 09:40:44.243841 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gc8lw\" (UniqueName: \"kubernetes.io/projected/31c5082a-1e8d-4026-9918-4daffe96da53-kube-api-access-gc8lw\") pod \"31c5082a-1e8d-4026-9918-4daffe96da53\" (UID: \"31c5082a-1e8d-4026-9918-4daffe96da53\") "
Jan 21 09:40:44 crc kubenswrapper[5113]: I0121 09:40:44.244878 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31c5082a-1e8d-4026-9918-4daffe96da53-utilities" (OuterVolumeSpecName: "utilities") pod "31c5082a-1e8d-4026-9918-4daffe96da53" (UID: "31c5082a-1e8d-4026-9918-4daffe96da53"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:40:44 crc kubenswrapper[5113]: I0121 09:40:44.250838 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31c5082a-1e8d-4026-9918-4daffe96da53-kube-api-access-gc8lw" (OuterVolumeSpecName: "kube-api-access-gc8lw") pod "31c5082a-1e8d-4026-9918-4daffe96da53" (UID: "31c5082a-1e8d-4026-9918-4daffe96da53"). InnerVolumeSpecName "kube-api-access-gc8lw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:40:44 crc kubenswrapper[5113]: I0121 09:40:44.294612 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31c5082a-1e8d-4026-9918-4daffe96da53-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31c5082a-1e8d-4026-9918-4daffe96da53" (UID: "31c5082a-1e8d-4026-9918-4daffe96da53"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:40:44 crc kubenswrapper[5113]: I0121 09:40:44.345630 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31c5082a-1e8d-4026-9918-4daffe96da53-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 09:40:44 crc kubenswrapper[5113]: I0121 09:40:44.345667 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31c5082a-1e8d-4026-9918-4daffe96da53-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 09:40:44 crc kubenswrapper[5113]: I0121 09:40:44.345683 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gc8lw\" (UniqueName: \"kubernetes.io/projected/31c5082a-1e8d-4026-9918-4daffe96da53-kube-api-access-gc8lw\") on node \"crc\" DevicePath \"\""
Jan 21 09:40:44 crc kubenswrapper[5113]: I0121 09:40:44.781351 5113 generic.go:358] "Generic (PLEG): container finished" podID="31c5082a-1e8d-4026-9918-4daffe96da53" containerID="2297cfc4e73b27d77c056c6c5c790bc026ea2a36f166a5f5e3ae0820bc19656a" exitCode=0
Jan 21 09:40:44 crc kubenswrapper[5113]: I0121 09:40:44.781724 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nk8kv"
Jan 21 09:40:44 crc kubenswrapper[5113]: I0121 09:40:44.781575 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nk8kv" event={"ID":"31c5082a-1e8d-4026-9918-4daffe96da53","Type":"ContainerDied","Data":"2297cfc4e73b27d77c056c6c5c790bc026ea2a36f166a5f5e3ae0820bc19656a"}
Jan 21 09:40:44 crc kubenswrapper[5113]: I0121 09:40:44.781946 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nk8kv" event={"ID":"31c5082a-1e8d-4026-9918-4daffe96da53","Type":"ContainerDied","Data":"68b6e56175f05d962f61588ec5656a6734a01739acf9c7928019f5464253ad19"}
Jan 21 09:40:44 crc kubenswrapper[5113]: I0121 09:40:44.781977 5113 scope.go:117] "RemoveContainer" containerID="2297cfc4e73b27d77c056c6c5c790bc026ea2a36f166a5f5e3ae0820bc19656a"
Jan 21 09:40:44 crc kubenswrapper[5113]: I0121 09:40:44.807046 5113 scope.go:117] "RemoveContainer" containerID="e45e93f560517d00a709a21f155fb52aec6d45dfdcb4ad0764176f3e1d9144d4"
Jan 21 09:40:44 crc kubenswrapper[5113]: I0121 09:40:44.831358 5113 scope.go:117] "RemoveContainer" containerID="54f3df6d87c0ae48a83e0b68271fd2cd36374f5e63a5618c29d2b2cf1b27a5b5"
Jan 21 09:40:44 crc kubenswrapper[5113]: I0121 09:40:44.857982 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nk8kv"]
Jan 21 09:40:44 crc kubenswrapper[5113]: I0121 09:40:44.858025 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nk8kv"]
Jan 21 09:40:44 crc kubenswrapper[5113]: I0121 09:40:44.863461 5113 scope.go:117] "RemoveContainer" containerID="2297cfc4e73b27d77c056c6c5c790bc026ea2a36f166a5f5e3ae0820bc19656a"
Jan 21 09:40:44 crc kubenswrapper[5113]: E0121 09:40:44.864179 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2297cfc4e73b27d77c056c6c5c790bc026ea2a36f166a5f5e3ae0820bc19656a\": container with ID starting with 2297cfc4e73b27d77c056c6c5c790bc026ea2a36f166a5f5e3ae0820bc19656a not found: ID does not exist" containerID="2297cfc4e73b27d77c056c6c5c790bc026ea2a36f166a5f5e3ae0820bc19656a"
Jan 21 09:40:44 crc kubenswrapper[5113]: I0121 09:40:44.864229 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2297cfc4e73b27d77c056c6c5c790bc026ea2a36f166a5f5e3ae0820bc19656a"} err="failed to get container status \"2297cfc4e73b27d77c056c6c5c790bc026ea2a36f166a5f5e3ae0820bc19656a\": rpc error: code = NotFound desc = could not find container \"2297cfc4e73b27d77c056c6c5c790bc026ea2a36f166a5f5e3ae0820bc19656a\": container with ID starting with 2297cfc4e73b27d77c056c6c5c790bc026ea2a36f166a5f5e3ae0820bc19656a not found: ID does not exist"
Jan 21 09:40:44 crc kubenswrapper[5113]: I0121 09:40:44.864276 5113 scope.go:117] "RemoveContainer" containerID="e45e93f560517d00a709a21f155fb52aec6d45dfdcb4ad0764176f3e1d9144d4"
Jan 21 09:40:44 crc kubenswrapper[5113]: E0121 09:40:44.864657 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e45e93f560517d00a709a21f155fb52aec6d45dfdcb4ad0764176f3e1d9144d4\": container with ID starting with e45e93f560517d00a709a21f155fb52aec6d45dfdcb4ad0764176f3e1d9144d4 not found: ID does not exist" containerID="e45e93f560517d00a709a21f155fb52aec6d45dfdcb4ad0764176f3e1d9144d4"
Jan 21 09:40:44 crc kubenswrapper[5113]: I0121 09:40:44.864697 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e45e93f560517d00a709a21f155fb52aec6d45dfdcb4ad0764176f3e1d9144d4"} err="failed to get container status \"e45e93f560517d00a709a21f155fb52aec6d45dfdcb4ad0764176f3e1d9144d4\": rpc error: code = NotFound desc = could not find container \"e45e93f560517d00a709a21f155fb52aec6d45dfdcb4ad0764176f3e1d9144d4\": container with ID starting with e45e93f560517d00a709a21f155fb52aec6d45dfdcb4ad0764176f3e1d9144d4 not found: ID does not exist"
Jan 21 09:40:44 crc kubenswrapper[5113]: I0121 09:40:44.864720 5113 scope.go:117] "RemoveContainer" containerID="54f3df6d87c0ae48a83e0b68271fd2cd36374f5e63a5618c29d2b2cf1b27a5b5"
Jan 21 09:40:44 crc kubenswrapper[5113]: E0121 09:40:44.865288 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54f3df6d87c0ae48a83e0b68271fd2cd36374f5e63a5618c29d2b2cf1b27a5b5\": container with ID starting with 54f3df6d87c0ae48a83e0b68271fd2cd36374f5e63a5618c29d2b2cf1b27a5b5 not found: ID does not exist" containerID="54f3df6d87c0ae48a83e0b68271fd2cd36374f5e63a5618c29d2b2cf1b27a5b5"
Jan 21 09:40:44 crc kubenswrapper[5113]: I0121 09:40:44.865333 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54f3df6d87c0ae48a83e0b68271fd2cd36374f5e63a5618c29d2b2cf1b27a5b5"} err="failed to get container status \"54f3df6d87c0ae48a83e0b68271fd2cd36374f5e63a5618c29d2b2cf1b27a5b5\": rpc error: code = NotFound desc = could not find container \"54f3df6d87c0ae48a83e0b68271fd2cd36374f5e63a5618c29d2b2cf1b27a5b5\": container with ID starting with 54f3df6d87c0ae48a83e0b68271fd2cd36374f5e63a5618c29d2b2cf1b27a5b5 not found: ID does not exist"
Jan 21 09:40:46 crc kubenswrapper[5113]: I0121 09:40:46.722454 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wk6bg"]
Jan 21 09:40:46 crc kubenswrapper[5113]: I0121 09:40:46.723344 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="31c5082a-1e8d-4026-9918-4daffe96da53" containerName="extract-utilities"
Jan 21 09:40:46 crc kubenswrapper[5113]: I0121 09:40:46.723360 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="31c5082a-1e8d-4026-9918-4daffe96da53" containerName="extract-utilities"
Jan 21 09:40:46 crc kubenswrapper[5113]: I0121 09:40:46.723383 5113
cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="31c5082a-1e8d-4026-9918-4daffe96da53" containerName="extract-content" Jan 21 09:40:46 crc kubenswrapper[5113]: I0121 09:40:46.723389 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="31c5082a-1e8d-4026-9918-4daffe96da53" containerName="extract-content" Jan 21 09:40:46 crc kubenswrapper[5113]: I0121 09:40:46.723400 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="31c5082a-1e8d-4026-9918-4daffe96da53" containerName="registry-server" Jan 21 09:40:46 crc kubenswrapper[5113]: I0121 09:40:46.723407 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="31c5082a-1e8d-4026-9918-4daffe96da53" containerName="registry-server" Jan 21 09:40:46 crc kubenswrapper[5113]: I0121 09:40:46.723497 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="31c5082a-1e8d-4026-9918-4daffe96da53" containerName="registry-server" Jan 21 09:40:47 crc kubenswrapper[5113]: I0121 09:40:47.331805 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wk6bg"] Jan 21 09:40:47 crc kubenswrapper[5113]: I0121 09:40:47.331866 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wk6bg" Jan 21 09:40:47 crc kubenswrapper[5113]: I0121 09:40:47.345536 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31c5082a-1e8d-4026-9918-4daffe96da53" path="/var/lib/kubelet/pods/31c5082a-1e8d-4026-9918-4daffe96da53/volumes" Jan 21 09:40:47 crc kubenswrapper[5113]: I0121 09:40:47.495800 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9v86\" (UniqueName: \"kubernetes.io/projected/bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9-kube-api-access-x9v86\") pod \"certified-operators-wk6bg\" (UID: \"bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9\") " pod="openshift-marketplace/certified-operators-wk6bg" Jan 21 09:40:47 crc kubenswrapper[5113]: I0121 09:40:47.496355 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9-catalog-content\") pod \"certified-operators-wk6bg\" (UID: \"bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9\") " pod="openshift-marketplace/certified-operators-wk6bg" Jan 21 09:40:47 crc kubenswrapper[5113]: I0121 09:40:47.496501 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9-utilities\") pod \"certified-operators-wk6bg\" (UID: \"bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9\") " pod="openshift-marketplace/certified-operators-wk6bg" Jan 21 09:40:47 crc kubenswrapper[5113]: I0121 09:40:47.598346 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9-catalog-content\") pod \"certified-operators-wk6bg\" (UID: \"bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9\") " pod="openshift-marketplace/certified-operators-wk6bg" Jan 21 09:40:47 crc 
kubenswrapper[5113]: I0121 09:40:47.598416 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9-utilities\") pod \"certified-operators-wk6bg\" (UID: \"bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9\") " pod="openshift-marketplace/certified-operators-wk6bg" Jan 21 09:40:47 crc kubenswrapper[5113]: I0121 09:40:47.598462 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x9v86\" (UniqueName: \"kubernetes.io/projected/bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9-kube-api-access-x9v86\") pod \"certified-operators-wk6bg\" (UID: \"bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9\") " pod="openshift-marketplace/certified-operators-wk6bg" Jan 21 09:40:47 crc kubenswrapper[5113]: I0121 09:40:47.601831 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9-catalog-content\") pod \"certified-operators-wk6bg\" (UID: \"bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9\") " pod="openshift-marketplace/certified-operators-wk6bg" Jan 21 09:40:47 crc kubenswrapper[5113]: I0121 09:40:47.603947 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9-utilities\") pod \"certified-operators-wk6bg\" (UID: \"bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9\") " pod="openshift-marketplace/certified-operators-wk6bg" Jan 21 09:40:47 crc kubenswrapper[5113]: I0121 09:40:47.623325 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9v86\" (UniqueName: \"kubernetes.io/projected/bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9-kube-api-access-x9v86\") pod \"certified-operators-wk6bg\" (UID: \"bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9\") " pod="openshift-marketplace/certified-operators-wk6bg" Jan 21 09:40:47 crc kubenswrapper[5113]: I0121 
09:40:47.655802 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wk6bg" Jan 21 09:40:47 crc kubenswrapper[5113]: I0121 09:40:47.979392 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wk6bg"] Jan 21 09:40:47 crc kubenswrapper[5113]: W0121 09:40:47.981340 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfc8474a_e9f1_4abf_bde3_01d8bd5a14b9.slice/crio-52645b07d6639a4152b22ed0c419c17cda93579777dc275b4f4b8f4c2189970f WatchSource:0}: Error finding container 52645b07d6639a4152b22ed0c419c17cda93579777dc275b4f4b8f4c2189970f: Status 404 returned error can't find the container with id 52645b07d6639a4152b22ed0c419c17cda93579777dc275b4f4b8f4c2189970f Jan 21 09:40:48 crc kubenswrapper[5113]: I0121 09:40:48.817663 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wk6bg" event={"ID":"bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9","Type":"ContainerStarted","Data":"52645b07d6639a4152b22ed0c419c17cda93579777dc275b4f4b8f4c2189970f"} Jan 21 09:40:49 crc kubenswrapper[5113]: I0121 09:40:49.827305 5113 generic.go:358] "Generic (PLEG): container finished" podID="bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9" containerID="cda7e86c69032ffb41103b7a979ee91e41cb33632ce72c099e0b9a3bf4888e84" exitCode=0 Jan 21 09:40:49 crc kubenswrapper[5113]: I0121 09:40:49.827366 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wk6bg" event={"ID":"bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9","Type":"ContainerDied","Data":"cda7e86c69032ffb41103b7a979ee91e41cb33632ce72c099e0b9a3bf4888e84"} Jan 21 09:40:50 crc kubenswrapper[5113]: I0121 09:40:50.837247 5113 generic.go:358] "Generic (PLEG): container finished" podID="bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9" 
containerID="aa39ae535a622183afba4d3d547afa80cd2788c8b166d7d609db93cc4e1c1a54" exitCode=0 Jan 21 09:40:50 crc kubenswrapper[5113]: I0121 09:40:50.837302 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wk6bg" event={"ID":"bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9","Type":"ContainerDied","Data":"aa39ae535a622183afba4d3d547afa80cd2788c8b166d7d609db93cc4e1c1a54"} Jan 21 09:40:51 crc kubenswrapper[5113]: I0121 09:40:51.845813 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wk6bg" event={"ID":"bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9","Type":"ContainerStarted","Data":"78326e2b57771132bc27a005cc92b1b32e03e92adc2eb057c8a41564d4d6b9ac"} Jan 21 09:40:51 crc kubenswrapper[5113]: I0121 09:40:51.865060 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wk6bg" podStartSLOduration=5.339535023 podStartE2EDuration="5.865043728s" podCreationTimestamp="2026-01-21 09:40:46 +0000 UTC" firstStartedPulling="2026-01-21 09:40:49.828370979 +0000 UTC m=+1379.329198038" lastFinishedPulling="2026-01-21 09:40:50.353879694 +0000 UTC m=+1379.854706743" observedRunningTime="2026-01-21 09:40:51.860922392 +0000 UTC m=+1381.361749441" watchObservedRunningTime="2026-01-21 09:40:51.865043728 +0000 UTC m=+1381.365870777" Jan 21 09:40:52 crc kubenswrapper[5113]: I0121 09:40:52.629960 5113 scope.go:117] "RemoveContainer" containerID="70b2617a4ff231b4a20fa3850daa4bdadbee6c5630f828eae44ac52e1ac8e587" Jan 21 09:40:57 crc kubenswrapper[5113]: I0121 09:40:57.656948 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wk6bg" Jan 21 09:40:57 crc kubenswrapper[5113]: I0121 09:40:57.657624 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-wk6bg" Jan 21 09:40:57 crc kubenswrapper[5113]: I0121 09:40:57.726557 
5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wk6bg" Jan 21 09:40:57 crc kubenswrapper[5113]: I0121 09:40:57.936150 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wk6bg" Jan 21 09:40:57 crc kubenswrapper[5113]: I0121 09:40:57.991159 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wk6bg"] Jan 21 09:40:59 crc kubenswrapper[5113]: I0121 09:40:59.908696 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wk6bg" podUID="bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9" containerName="registry-server" containerID="cri-o://78326e2b57771132bc27a005cc92b1b32e03e92adc2eb057c8a41564d4d6b9ac" gracePeriod=2 Jan 21 09:41:00 crc kubenswrapper[5113]: I0121 09:41:00.422126 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wk6bg" Jan 21 09:41:00 crc kubenswrapper[5113]: I0121 09:41:00.499230 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9v86\" (UniqueName: \"kubernetes.io/projected/bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9-kube-api-access-x9v86\") pod \"bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9\" (UID: \"bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9\") " Jan 21 09:41:00 crc kubenswrapper[5113]: I0121 09:41:00.499344 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9-catalog-content\") pod \"bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9\" (UID: \"bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9\") " Jan 21 09:41:00 crc kubenswrapper[5113]: I0121 09:41:00.499550 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9-utilities\") pod \"bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9\" (UID: \"bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9\") " Jan 21 09:41:00 crc kubenswrapper[5113]: I0121 09:41:00.500420 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9-utilities" (OuterVolumeSpecName: "utilities") pod "bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9" (UID: "bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:41:00 crc kubenswrapper[5113]: I0121 09:41:00.508127 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9-kube-api-access-x9v86" (OuterVolumeSpecName: "kube-api-access-x9v86") pod "bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9" (UID: "bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9"). InnerVolumeSpecName "kube-api-access-x9v86". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:41:00 crc kubenswrapper[5113]: I0121 09:41:00.557371 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9" (UID: "bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:41:00 crc kubenswrapper[5113]: I0121 09:41:00.601622 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x9v86\" (UniqueName: \"kubernetes.io/projected/bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9-kube-api-access-x9v86\") on node \"crc\" DevicePath \"\"" Jan 21 09:41:00 crc kubenswrapper[5113]: I0121 09:41:00.601676 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 09:41:00 crc kubenswrapper[5113]: I0121 09:41:00.601697 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 09:41:00 crc kubenswrapper[5113]: I0121 09:41:00.920716 5113 generic.go:358] "Generic (PLEG): container finished" podID="bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9" containerID="78326e2b57771132bc27a005cc92b1b32e03e92adc2eb057c8a41564d4d6b9ac" exitCode=0 Jan 21 09:41:00 crc kubenswrapper[5113]: I0121 09:41:00.920836 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wk6bg" event={"ID":"bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9","Type":"ContainerDied","Data":"78326e2b57771132bc27a005cc92b1b32e03e92adc2eb057c8a41564d4d6b9ac"} Jan 21 09:41:00 crc kubenswrapper[5113]: I0121 09:41:00.921203 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wk6bg" event={"ID":"bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9","Type":"ContainerDied","Data":"52645b07d6639a4152b22ed0c419c17cda93579777dc275b4f4b8f4c2189970f"} Jan 21 09:41:00 crc kubenswrapper[5113]: I0121 09:41:00.921242 5113 scope.go:117] "RemoveContainer" containerID="78326e2b57771132bc27a005cc92b1b32e03e92adc2eb057c8a41564d4d6b9ac" Jan 21 09:41:00 crc kubenswrapper[5113]: I0121 
09:41:00.920877 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wk6bg" Jan 21 09:41:00 crc kubenswrapper[5113]: I0121 09:41:00.950641 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wk6bg"] Jan 21 09:41:00 crc kubenswrapper[5113]: I0121 09:41:00.957482 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wk6bg"] Jan 21 09:41:00 crc kubenswrapper[5113]: I0121 09:41:00.959859 5113 scope.go:117] "RemoveContainer" containerID="aa39ae535a622183afba4d3d547afa80cd2788c8b166d7d609db93cc4e1c1a54" Jan 21 09:41:00 crc kubenswrapper[5113]: I0121 09:41:00.980096 5113 scope.go:117] "RemoveContainer" containerID="cda7e86c69032ffb41103b7a979ee91e41cb33632ce72c099e0b9a3bf4888e84" Jan 21 09:41:01 crc kubenswrapper[5113]: I0121 09:41:01.016325 5113 scope.go:117] "RemoveContainer" containerID="78326e2b57771132bc27a005cc92b1b32e03e92adc2eb057c8a41564d4d6b9ac" Jan 21 09:41:01 crc kubenswrapper[5113]: E0121 09:41:01.017194 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78326e2b57771132bc27a005cc92b1b32e03e92adc2eb057c8a41564d4d6b9ac\": container with ID starting with 78326e2b57771132bc27a005cc92b1b32e03e92adc2eb057c8a41564d4d6b9ac not found: ID does not exist" containerID="78326e2b57771132bc27a005cc92b1b32e03e92adc2eb057c8a41564d4d6b9ac" Jan 21 09:41:01 crc kubenswrapper[5113]: I0121 09:41:01.017316 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78326e2b57771132bc27a005cc92b1b32e03e92adc2eb057c8a41564d4d6b9ac"} err="failed to get container status \"78326e2b57771132bc27a005cc92b1b32e03e92adc2eb057c8a41564d4d6b9ac\": rpc error: code = NotFound desc = could not find container \"78326e2b57771132bc27a005cc92b1b32e03e92adc2eb057c8a41564d4d6b9ac\": container with ID starting with 
78326e2b57771132bc27a005cc92b1b32e03e92adc2eb057c8a41564d4d6b9ac not found: ID does not exist" Jan 21 09:41:01 crc kubenswrapper[5113]: I0121 09:41:01.017357 5113 scope.go:117] "RemoveContainer" containerID="aa39ae535a622183afba4d3d547afa80cd2788c8b166d7d609db93cc4e1c1a54" Jan 21 09:41:01 crc kubenswrapper[5113]: E0121 09:41:01.017796 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa39ae535a622183afba4d3d547afa80cd2788c8b166d7d609db93cc4e1c1a54\": container with ID starting with aa39ae535a622183afba4d3d547afa80cd2788c8b166d7d609db93cc4e1c1a54 not found: ID does not exist" containerID="aa39ae535a622183afba4d3d547afa80cd2788c8b166d7d609db93cc4e1c1a54" Jan 21 09:41:01 crc kubenswrapper[5113]: I0121 09:41:01.017859 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa39ae535a622183afba4d3d547afa80cd2788c8b166d7d609db93cc4e1c1a54"} err="failed to get container status \"aa39ae535a622183afba4d3d547afa80cd2788c8b166d7d609db93cc4e1c1a54\": rpc error: code = NotFound desc = could not find container \"aa39ae535a622183afba4d3d547afa80cd2788c8b166d7d609db93cc4e1c1a54\": container with ID starting with aa39ae535a622183afba4d3d547afa80cd2788c8b166d7d609db93cc4e1c1a54 not found: ID does not exist" Jan 21 09:41:01 crc kubenswrapper[5113]: I0121 09:41:01.017881 5113 scope.go:117] "RemoveContainer" containerID="cda7e86c69032ffb41103b7a979ee91e41cb33632ce72c099e0b9a3bf4888e84" Jan 21 09:41:01 crc kubenswrapper[5113]: E0121 09:41:01.018154 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cda7e86c69032ffb41103b7a979ee91e41cb33632ce72c099e0b9a3bf4888e84\": container with ID starting with cda7e86c69032ffb41103b7a979ee91e41cb33632ce72c099e0b9a3bf4888e84 not found: ID does not exist" containerID="cda7e86c69032ffb41103b7a979ee91e41cb33632ce72c099e0b9a3bf4888e84" Jan 21 09:41:01 crc 
kubenswrapper[5113]: I0121 09:41:01.018177 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cda7e86c69032ffb41103b7a979ee91e41cb33632ce72c099e0b9a3bf4888e84"} err="failed to get container status \"cda7e86c69032ffb41103b7a979ee91e41cb33632ce72c099e0b9a3bf4888e84\": rpc error: code = NotFound desc = could not find container \"cda7e86c69032ffb41103b7a979ee91e41cb33632ce72c099e0b9a3bf4888e84\": container with ID starting with cda7e86c69032ffb41103b7a979ee91e41cb33632ce72c099e0b9a3bf4888e84 not found: ID does not exist" Jan 21 09:41:02 crc kubenswrapper[5113]: I0121 09:41:02.855177 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9" path="/var/lib/kubelet/pods/bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9/volumes" Jan 21 09:41:04 crc kubenswrapper[5113]: I0121 09:41:04.385135 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b9q8j"] Jan 21 09:41:04 crc kubenswrapper[5113]: I0121 09:41:04.386164 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9" containerName="extract-utilities" Jan 21 09:41:04 crc kubenswrapper[5113]: I0121 09:41:04.386184 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9" containerName="extract-utilities" Jan 21 09:41:04 crc kubenswrapper[5113]: I0121 09:41:04.386218 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9" containerName="extract-content" Jan 21 09:41:04 crc kubenswrapper[5113]: I0121 09:41:04.386229 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9" containerName="extract-content" Jan 21 09:41:04 crc kubenswrapper[5113]: I0121 09:41:04.386255 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9" containerName="registry-server" Jan 21 09:41:04 crc kubenswrapper[5113]: I0121 09:41:04.386267 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9" containerName="registry-server" Jan 21 09:41:04 crc kubenswrapper[5113]: I0121 09:41:04.386442 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="bfc8474a-e9f1-4abf-bde3-01d8bd5a14b9" containerName="registry-server" Jan 21 09:41:04 crc kubenswrapper[5113]: I0121 09:41:04.393290 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b9q8j" Jan 21 09:41:04 crc kubenswrapper[5113]: I0121 09:41:04.418595 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b9q8j"] Jan 21 09:41:04 crc kubenswrapper[5113]: I0121 09:41:04.457389 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bqld\" (UniqueName: \"kubernetes.io/projected/065a28b4-f78d-48c2-843c-53d51595e1a3-kube-api-access-5bqld\") pod \"redhat-operators-b9q8j\" (UID: \"065a28b4-f78d-48c2-843c-53d51595e1a3\") " pod="openshift-marketplace/redhat-operators-b9q8j" Jan 21 09:41:04 crc kubenswrapper[5113]: I0121 09:41:04.457453 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/065a28b4-f78d-48c2-843c-53d51595e1a3-utilities\") pod \"redhat-operators-b9q8j\" (UID: \"065a28b4-f78d-48c2-843c-53d51595e1a3\") " pod="openshift-marketplace/redhat-operators-b9q8j" Jan 21 09:41:04 crc kubenswrapper[5113]: I0121 09:41:04.457483 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/065a28b4-f78d-48c2-843c-53d51595e1a3-catalog-content\") pod \"redhat-operators-b9q8j\" (UID: 
\"065a28b4-f78d-48c2-843c-53d51595e1a3\") " pod="openshift-marketplace/redhat-operators-b9q8j" Jan 21 09:41:04 crc kubenswrapper[5113]: I0121 09:41:04.560036 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5bqld\" (UniqueName: \"kubernetes.io/projected/065a28b4-f78d-48c2-843c-53d51595e1a3-kube-api-access-5bqld\") pod \"redhat-operators-b9q8j\" (UID: \"065a28b4-f78d-48c2-843c-53d51595e1a3\") " pod="openshift-marketplace/redhat-operators-b9q8j" Jan 21 09:41:04 crc kubenswrapper[5113]: I0121 09:41:04.560222 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/065a28b4-f78d-48c2-843c-53d51595e1a3-utilities\") pod \"redhat-operators-b9q8j\" (UID: \"065a28b4-f78d-48c2-843c-53d51595e1a3\") " pod="openshift-marketplace/redhat-operators-b9q8j" Jan 21 09:41:04 crc kubenswrapper[5113]: I0121 09:41:04.560283 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/065a28b4-f78d-48c2-843c-53d51595e1a3-catalog-content\") pod \"redhat-operators-b9q8j\" (UID: \"065a28b4-f78d-48c2-843c-53d51595e1a3\") " pod="openshift-marketplace/redhat-operators-b9q8j" Jan 21 09:41:04 crc kubenswrapper[5113]: I0121 09:41:04.560944 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/065a28b4-f78d-48c2-843c-53d51595e1a3-catalog-content\") pod \"redhat-operators-b9q8j\" (UID: \"065a28b4-f78d-48c2-843c-53d51595e1a3\") " pod="openshift-marketplace/redhat-operators-b9q8j" Jan 21 09:41:04 crc kubenswrapper[5113]: I0121 09:41:04.561181 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/065a28b4-f78d-48c2-843c-53d51595e1a3-utilities\") pod \"redhat-operators-b9q8j\" (UID: \"065a28b4-f78d-48c2-843c-53d51595e1a3\") " 
pod="openshift-marketplace/redhat-operators-b9q8j"
Jan 21 09:41:04 crc kubenswrapper[5113]: I0121 09:41:04.592021 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bqld\" (UniqueName: \"kubernetes.io/projected/065a28b4-f78d-48c2-843c-53d51595e1a3-kube-api-access-5bqld\") pod \"redhat-operators-b9q8j\" (UID: \"065a28b4-f78d-48c2-843c-53d51595e1a3\") " pod="openshift-marketplace/redhat-operators-b9q8j"
Jan 21 09:41:04 crc kubenswrapper[5113]: I0121 09:41:04.718766 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b9q8j"
Jan 21 09:41:05 crc kubenswrapper[5113]: I0121 09:41:05.214005 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b9q8j"]
Jan 21 09:41:05 crc kubenswrapper[5113]: I0121 09:41:05.961818 5113 generic.go:358] "Generic (PLEG): container finished" podID="065a28b4-f78d-48c2-843c-53d51595e1a3" containerID="32ca0d0f2d72f9fd39164778adb2946591f0c9d18cd6acb642f76167e54dc83b" exitCode=0
Jan 21 09:41:05 crc kubenswrapper[5113]: I0121 09:41:05.961925 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b9q8j" event={"ID":"065a28b4-f78d-48c2-843c-53d51595e1a3","Type":"ContainerDied","Data":"32ca0d0f2d72f9fd39164778adb2946591f0c9d18cd6acb642f76167e54dc83b"}
Jan 21 09:41:05 crc kubenswrapper[5113]: I0121 09:41:05.962170 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b9q8j" event={"ID":"065a28b4-f78d-48c2-843c-53d51595e1a3","Type":"ContainerStarted","Data":"51e4268dfff886bc8a584fe6abbacc96fcf32f01f7e87990085d37f98d15b67d"}
Jan 21 09:41:06 crc kubenswrapper[5113]: I0121 09:41:06.975111 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b9q8j" event={"ID":"065a28b4-f78d-48c2-843c-53d51595e1a3","Type":"ContainerStarted","Data":"bb1546b4f233804ec3cb1fd95230dafcdcdabfd10f4fb494efe4227b4d5fa7dc"}
Jan 21 09:41:07 crc kubenswrapper[5113]: I0121 09:41:07.989067 5113 generic.go:358] "Generic (PLEG): container finished" podID="065a28b4-f78d-48c2-843c-53d51595e1a3" containerID="bb1546b4f233804ec3cb1fd95230dafcdcdabfd10f4fb494efe4227b4d5fa7dc" exitCode=0
Jan 21 09:41:07 crc kubenswrapper[5113]: I0121 09:41:07.990463 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b9q8j" event={"ID":"065a28b4-f78d-48c2-843c-53d51595e1a3","Type":"ContainerDied","Data":"bb1546b4f233804ec3cb1fd95230dafcdcdabfd10f4fb494efe4227b4d5fa7dc"}
Jan 21 09:41:09 crc kubenswrapper[5113]: I0121 09:41:09.004317 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b9q8j" event={"ID":"065a28b4-f78d-48c2-843c-53d51595e1a3","Type":"ContainerStarted","Data":"7fb07137f23a9ba79263366b90606063c83123256577286ab26217358bae6085"}
Jan 21 09:41:09 crc kubenswrapper[5113]: I0121 09:41:09.029546 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b9q8j" podStartSLOduration=4.295744362 podStartE2EDuration="5.029527848s" podCreationTimestamp="2026-01-21 09:41:04 +0000 UTC" firstStartedPulling="2026-01-21 09:41:05.963229604 +0000 UTC m=+1395.464056693" lastFinishedPulling="2026-01-21 09:41:06.6970131 +0000 UTC m=+1396.197840179" observedRunningTime="2026-01-21 09:41:09.026792851 +0000 UTC m=+1398.527619900" watchObservedRunningTime="2026-01-21 09:41:09.029527848 +0000 UTC m=+1398.530354897"
Jan 21 09:41:14 crc kubenswrapper[5113]: I0121 09:41:14.719085 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-b9q8j"
Jan 21 09:41:14 crc kubenswrapper[5113]: I0121 09:41:14.721379 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b9q8j"
Jan 21 09:41:14 crc kubenswrapper[5113]: I0121 09:41:14.780322 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b9q8j"
Jan 21 09:41:15 crc kubenswrapper[5113]: I0121 09:41:15.116517 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b9q8j"
Jan 21 09:41:15 crc kubenswrapper[5113]: I0121 09:41:15.185577 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b9q8j"]
Jan 21 09:41:17 crc kubenswrapper[5113]: I0121 09:41:17.061539 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b9q8j" podUID="065a28b4-f78d-48c2-843c-53d51595e1a3" containerName="registry-server" containerID="cri-o://7fb07137f23a9ba79263366b90606063c83123256577286ab26217358bae6085" gracePeriod=2
Jan 21 09:41:18 crc kubenswrapper[5113]: I0121 09:41:18.071715 5113 generic.go:358] "Generic (PLEG): container finished" podID="065a28b4-f78d-48c2-843c-53d51595e1a3" containerID="7fb07137f23a9ba79263366b90606063c83123256577286ab26217358bae6085" exitCode=0
Jan 21 09:41:18 crc kubenswrapper[5113]: I0121 09:41:18.071785 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b9q8j" event={"ID":"065a28b4-f78d-48c2-843c-53d51595e1a3","Type":"ContainerDied","Data":"7fb07137f23a9ba79263366b90606063c83123256577286ab26217358bae6085"}
Jan 21 09:41:18 crc kubenswrapper[5113]: I0121 09:41:18.072398 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b9q8j" event={"ID":"065a28b4-f78d-48c2-843c-53d51595e1a3","Type":"ContainerDied","Data":"51e4268dfff886bc8a584fe6abbacc96fcf32f01f7e87990085d37f98d15b67d"}
Jan 21 09:41:18 crc kubenswrapper[5113]: I0121 09:41:18.072446 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51e4268dfff886bc8a584fe6abbacc96fcf32f01f7e87990085d37f98d15b67d"
Jan 21 09:41:18 crc kubenswrapper[5113]: I0121 09:41:18.110670 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b9q8j"
Jan 21 09:41:18 crc kubenswrapper[5113]: I0121 09:41:18.187629 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bqld\" (UniqueName: \"kubernetes.io/projected/065a28b4-f78d-48c2-843c-53d51595e1a3-kube-api-access-5bqld\") pod \"065a28b4-f78d-48c2-843c-53d51595e1a3\" (UID: \"065a28b4-f78d-48c2-843c-53d51595e1a3\") "
Jan 21 09:41:18 crc kubenswrapper[5113]: I0121 09:41:18.187980 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/065a28b4-f78d-48c2-843c-53d51595e1a3-catalog-content\") pod \"065a28b4-f78d-48c2-843c-53d51595e1a3\" (UID: \"065a28b4-f78d-48c2-843c-53d51595e1a3\") "
Jan 21 09:41:18 crc kubenswrapper[5113]: I0121 09:41:18.188102 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/065a28b4-f78d-48c2-843c-53d51595e1a3-utilities\") pod \"065a28b4-f78d-48c2-843c-53d51595e1a3\" (UID: \"065a28b4-f78d-48c2-843c-53d51595e1a3\") "
Jan 21 09:41:18 crc kubenswrapper[5113]: I0121 09:41:18.190158 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/065a28b4-f78d-48c2-843c-53d51595e1a3-utilities" (OuterVolumeSpecName: "utilities") pod "065a28b4-f78d-48c2-843c-53d51595e1a3" (UID: "065a28b4-f78d-48c2-843c-53d51595e1a3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:41:18 crc kubenswrapper[5113]: I0121 09:41:18.199982 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/065a28b4-f78d-48c2-843c-53d51595e1a3-kube-api-access-5bqld" (OuterVolumeSpecName: "kube-api-access-5bqld") pod "065a28b4-f78d-48c2-843c-53d51595e1a3" (UID: "065a28b4-f78d-48c2-843c-53d51595e1a3"). InnerVolumeSpecName "kube-api-access-5bqld". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:41:18 crc kubenswrapper[5113]: I0121 09:41:18.289212 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/065a28b4-f78d-48c2-843c-53d51595e1a3-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 09:41:18 crc kubenswrapper[5113]: I0121 09:41:18.289252 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5bqld\" (UniqueName: \"kubernetes.io/projected/065a28b4-f78d-48c2-843c-53d51595e1a3-kube-api-access-5bqld\") on node \"crc\" DevicePath \"\""
Jan 21 09:41:18 crc kubenswrapper[5113]: I0121 09:41:18.304326 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/065a28b4-f78d-48c2-843c-53d51595e1a3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "065a28b4-f78d-48c2-843c-53d51595e1a3" (UID: "065a28b4-f78d-48c2-843c-53d51595e1a3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:41:18 crc kubenswrapper[5113]: I0121 09:41:18.395695 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/065a28b4-f78d-48c2-843c-53d51595e1a3-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 09:41:19 crc kubenswrapper[5113]: I0121 09:41:19.572785 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b9q8j"
Jan 21 09:41:19 crc kubenswrapper[5113]: I0121 09:41:19.607233 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b9q8j"]
Jan 21 09:41:19 crc kubenswrapper[5113]: I0121 09:41:19.613607 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b9q8j"]
Jan 21 09:41:20 crc kubenswrapper[5113]: I0121 09:41:20.854919 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="065a28b4-f78d-48c2-843c-53d51595e1a3" path="/var/lib/kubelet/pods/065a28b4-f78d-48c2-843c-53d51595e1a3/volumes"
Jan 21 09:41:34 crc kubenswrapper[5113]: I0121 09:41:34.696920 5113 generic.go:358] "Generic (PLEG): container finished" podID="5f329e83-b6df-4338-bd89-08e3346dadf3" containerID="1758897591d36b0421ac9b26aeabb1d4880af1ed91b388e45b24e4942e9d604f" exitCode=0
Jan 21 09:41:34 crc kubenswrapper[5113]: I0121 09:41:34.697011 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"5f329e83-b6df-4338-bd89-08e3346dadf3","Type":"ContainerDied","Data":"1758897591d36b0421ac9b26aeabb1d4880af1ed91b388e45b24e4942e9d604f"}
Jan 21 09:41:36 crc kubenswrapper[5113]: I0121 09:41:36.000880 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 21 09:41:36 crc kubenswrapper[5113]: I0121 09:41:36.023927 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5f329e83-b6df-4338-bd89-08e3346dadf3-build-ca-bundles\") pod \"5f329e83-b6df-4338-bd89-08e3346dadf3\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") "
Jan 21 09:41:36 crc kubenswrapper[5113]: I0121 09:41:36.023997 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5f329e83-b6df-4338-bd89-08e3346dadf3-build-system-configs\") pod \"5f329e83-b6df-4338-bd89-08e3346dadf3\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") "
Jan 21 09:41:36 crc kubenswrapper[5113]: I0121 09:41:36.024032 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5f329e83-b6df-4338-bd89-08e3346dadf3-build-proxy-ca-bundles\") pod \"5f329e83-b6df-4338-bd89-08e3346dadf3\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") "
Jan 21 09:41:36 crc kubenswrapper[5113]: I0121 09:41:36.024062 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5f329e83-b6df-4338-bd89-08e3346dadf3-buildcachedir\") pod \"5f329e83-b6df-4338-bd89-08e3346dadf3\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") "
Jan 21 09:41:36 crc kubenswrapper[5113]: I0121 09:41:36.024093 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/5f329e83-b6df-4338-bd89-08e3346dadf3-builder-dockercfg-xwwzx-push\") pod \"5f329e83-b6df-4338-bd89-08e3346dadf3\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") "
Jan 21 09:41:36 crc kubenswrapper[5113]: I0121 09:41:36.024108 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnnp2\" (UniqueName: \"kubernetes.io/projected/5f329e83-b6df-4338-bd89-08e3346dadf3-kube-api-access-xnnp2\") pod \"5f329e83-b6df-4338-bd89-08e3346dadf3\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") "
Jan 21 09:41:36 crc kubenswrapper[5113]: I0121 09:41:36.024177 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f329e83-b6df-4338-bd89-08e3346dadf3-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "5f329e83-b6df-4338-bd89-08e3346dadf3" (UID: "5f329e83-b6df-4338-bd89-08e3346dadf3"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 09:41:36 crc kubenswrapper[5113]: I0121 09:41:36.025188 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f329e83-b6df-4338-bd89-08e3346dadf3-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "5f329e83-b6df-4338-bd89-08e3346dadf3" (UID: "5f329e83-b6df-4338-bd89-08e3346dadf3"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:41:36 crc kubenswrapper[5113]: I0121 09:41:36.025204 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f329e83-b6df-4338-bd89-08e3346dadf3-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "5f329e83-b6df-4338-bd89-08e3346dadf3" (UID: "5f329e83-b6df-4338-bd89-08e3346dadf3"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:41:36 crc kubenswrapper[5113]: I0121 09:41:36.025503 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5f329e83-b6df-4338-bd89-08e3346dadf3-container-storage-run\") pod \"5f329e83-b6df-4338-bd89-08e3346dadf3\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") "
Jan 21 09:41:36 crc kubenswrapper[5113]: I0121 09:41:36.025535 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5f329e83-b6df-4338-bd89-08e3346dadf3-container-storage-root\") pod \"5f329e83-b6df-4338-bd89-08e3346dadf3\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") "
Jan 21 09:41:36 crc kubenswrapper[5113]: I0121 09:41:36.025556 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5f329e83-b6df-4338-bd89-08e3346dadf3-node-pullsecrets\") pod \"5f329e83-b6df-4338-bd89-08e3346dadf3\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") "
Jan 21 09:41:36 crc kubenswrapper[5113]: I0121 09:41:36.025582 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5f329e83-b6df-4338-bd89-08e3346dadf3-buildworkdir\") pod \"5f329e83-b6df-4338-bd89-08e3346dadf3\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") "
Jan 21 09:41:36 crc kubenswrapper[5113]: I0121 09:41:36.025597 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5f329e83-b6df-4338-bd89-08e3346dadf3-build-blob-cache\") pod \"5f329e83-b6df-4338-bd89-08e3346dadf3\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") "
Jan 21 09:41:36 crc kubenswrapper[5113]: I0121 09:41:36.025637 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/5f329e83-b6df-4338-bd89-08e3346dadf3-builder-dockercfg-xwwzx-pull\") pod \"5f329e83-b6df-4338-bd89-08e3346dadf3\" (UID: \"5f329e83-b6df-4338-bd89-08e3346dadf3\") "
Jan 21 09:41:36 crc kubenswrapper[5113]: I0121 09:41:36.025908 5113 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5f329e83-b6df-4338-bd89-08e3346dadf3-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 21 09:41:36 crc kubenswrapper[5113]: I0121 09:41:36.026276 5113 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5f329e83-b6df-4338-bd89-08e3346dadf3-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 21 09:41:36 crc kubenswrapper[5113]: I0121 09:41:36.026295 5113 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5f329e83-b6df-4338-bd89-08e3346dadf3-buildcachedir\") on node \"crc\" DevicePath \"\""
Jan 21 09:41:36 crc kubenswrapper[5113]: I0121 09:41:36.025912 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f329e83-b6df-4338-bd89-08e3346dadf3-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "5f329e83-b6df-4338-bd89-08e3346dadf3" (UID: "5f329e83-b6df-4338-bd89-08e3346dadf3"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:41:36 crc kubenswrapper[5113]: I0121 09:41:36.025932 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f329e83-b6df-4338-bd89-08e3346dadf3-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "5f329e83-b6df-4338-bd89-08e3346dadf3" (UID: "5f329e83-b6df-4338-bd89-08e3346dadf3"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 21 09:41:36 crc kubenswrapper[5113]: I0121 09:41:36.026577 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f329e83-b6df-4338-bd89-08e3346dadf3-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "5f329e83-b6df-4338-bd89-08e3346dadf3" (UID: "5f329e83-b6df-4338-bd89-08e3346dadf3"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:41:36 crc kubenswrapper[5113]: I0121 09:41:36.032767 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f329e83-b6df-4338-bd89-08e3346dadf3-kube-api-access-xnnp2" (OuterVolumeSpecName: "kube-api-access-xnnp2") pod "5f329e83-b6df-4338-bd89-08e3346dadf3" (UID: "5f329e83-b6df-4338-bd89-08e3346dadf3"). InnerVolumeSpecName "kube-api-access-xnnp2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:41:36 crc kubenswrapper[5113]: I0121 09:41:36.032885 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f329e83-b6df-4338-bd89-08e3346dadf3-builder-dockercfg-xwwzx-push" (OuterVolumeSpecName: "builder-dockercfg-xwwzx-push") pod "5f329e83-b6df-4338-bd89-08e3346dadf3" (UID: "5f329e83-b6df-4338-bd89-08e3346dadf3"). InnerVolumeSpecName "builder-dockercfg-xwwzx-push". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:41:36 crc kubenswrapper[5113]: I0121 09:41:36.034671 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f329e83-b6df-4338-bd89-08e3346dadf3-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "5f329e83-b6df-4338-bd89-08e3346dadf3" (UID: "5f329e83-b6df-4338-bd89-08e3346dadf3"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:41:36 crc kubenswrapper[5113]: I0121 09:41:36.035265 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f329e83-b6df-4338-bd89-08e3346dadf3-builder-dockercfg-xwwzx-pull" (OuterVolumeSpecName: "builder-dockercfg-xwwzx-pull") pod "5f329e83-b6df-4338-bd89-08e3346dadf3" (UID: "5f329e83-b6df-4338-bd89-08e3346dadf3"). InnerVolumeSpecName "builder-dockercfg-xwwzx-pull". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:41:36 crc kubenswrapper[5113]: I0121 09:41:36.128862 5113 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5f329e83-b6df-4338-bd89-08e3346dadf3-container-storage-run\") on node \"crc\" DevicePath \"\""
Jan 21 09:41:36 crc kubenswrapper[5113]: I0121 09:41:36.128910 5113 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5f329e83-b6df-4338-bd89-08e3346dadf3-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Jan 21 09:41:36 crc kubenswrapper[5113]: I0121 09:41:36.128923 5113 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5f329e83-b6df-4338-bd89-08e3346dadf3-buildworkdir\") on node \"crc\" DevicePath \"\""
Jan 21 09:41:36 crc kubenswrapper[5113]: I0121 09:41:36.128936 5113 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-xwwzx-pull\" (UniqueName: \"kubernetes.io/secret/5f329e83-b6df-4338-bd89-08e3346dadf3-builder-dockercfg-xwwzx-pull\") on node \"crc\" DevicePath \"\""
Jan 21 09:41:36 crc kubenswrapper[5113]: I0121 09:41:36.128951 5113 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5f329e83-b6df-4338-bd89-08e3346dadf3-build-system-configs\") on node \"crc\" DevicePath \"\""
Jan 21 09:41:36 crc kubenswrapper[5113]: I0121 09:41:36.128960 5113 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-xwwzx-push\" (UniqueName: \"kubernetes.io/secret/5f329e83-b6df-4338-bd89-08e3346dadf3-builder-dockercfg-xwwzx-push\") on node \"crc\" DevicePath \"\""
Jan 21 09:41:36 crc kubenswrapper[5113]: I0121 09:41:36.128970 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnnp2\" (UniqueName: \"kubernetes.io/projected/5f329e83-b6df-4338-bd89-08e3346dadf3-kube-api-access-xnnp2\") on node \"crc\" DevicePath \"\""
Jan 21 09:41:36 crc kubenswrapper[5113]: I0121 09:41:36.145974 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f329e83-b6df-4338-bd89-08e3346dadf3-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "5f329e83-b6df-4338-bd89-08e3346dadf3" (UID: "5f329e83-b6df-4338-bd89-08e3346dadf3"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:41:36 crc kubenswrapper[5113]: I0121 09:41:36.230814 5113 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5f329e83-b6df-4338-bd89-08e3346dadf3-build-blob-cache\") on node \"crc\" DevicePath \"\""
Jan 21 09:41:36 crc kubenswrapper[5113]: I0121 09:41:36.721114 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 21 09:41:36 crc kubenswrapper[5113]: I0121 09:41:36.721132 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"5f329e83-b6df-4338-bd89-08e3346dadf3","Type":"ContainerDied","Data":"00c400e469c71f696f1d9e185df80b5c9eb9978c22988d271fa0fe668b01cbe5"}
Jan 21 09:41:36 crc kubenswrapper[5113]: I0121 09:41:36.721189 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00c400e469c71f696f1d9e185df80b5c9eb9978c22988d271fa0fe668b01cbe5"
Jan 21 09:41:36 crc kubenswrapper[5113]: I0121 09:41:36.969269 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f329e83-b6df-4338-bd89-08e3346dadf3-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "5f329e83-b6df-4338-bd89-08e3346dadf3" (UID: "5f329e83-b6df-4338-bd89-08e3346dadf3"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:41:37 crc kubenswrapper[5113]: I0121 09:41:37.040987 5113 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5f329e83-b6df-4338-bd89-08e3346dadf3-container-storage-root\") on node \"crc\" DevicePath \"\""
Jan 21 09:41:42 crc kubenswrapper[5113]: I0121 09:41:42.051104 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-5688757f5c-tvmkz"]
Jan 21 09:41:42 crc kubenswrapper[5113]: I0121 09:41:42.053233 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="065a28b4-f78d-48c2-843c-53d51595e1a3" containerName="extract-content"
Jan 21 09:41:42 crc kubenswrapper[5113]: I0121 09:41:42.053307 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="065a28b4-f78d-48c2-843c-53d51595e1a3" containerName="extract-content"
Jan 21 09:41:42 crc kubenswrapper[5113]: I0121 09:41:42.053335 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5f329e83-b6df-4338-bd89-08e3346dadf3" containerName="manage-dockerfile"
Jan 21 09:41:42 crc kubenswrapper[5113]: I0121 09:41:42.053348 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f329e83-b6df-4338-bd89-08e3346dadf3" containerName="manage-dockerfile"
Jan 21 09:41:42 crc kubenswrapper[5113]: I0121 09:41:42.053368 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="065a28b4-f78d-48c2-843c-53d51595e1a3" containerName="extract-utilities"
Jan 21 09:41:42 crc kubenswrapper[5113]: I0121 09:41:42.053379 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="065a28b4-f78d-48c2-843c-53d51595e1a3" containerName="extract-utilities"
Jan 21 09:41:42 crc kubenswrapper[5113]: I0121 09:41:42.053397 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="065a28b4-f78d-48c2-843c-53d51595e1a3" containerName="registry-server"
Jan 21 09:41:42 crc kubenswrapper[5113]: I0121 09:41:42.053407 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="065a28b4-f78d-48c2-843c-53d51595e1a3" containerName="registry-server"
Jan 21 09:41:42 crc kubenswrapper[5113]: I0121 09:41:42.053436 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5f329e83-b6df-4338-bd89-08e3346dadf3" containerName="docker-build"
Jan 21 09:41:42 crc kubenswrapper[5113]: I0121 09:41:42.053447 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f329e83-b6df-4338-bd89-08e3346dadf3" containerName="docker-build"
Jan 21 09:41:42 crc kubenswrapper[5113]: I0121 09:41:42.053476 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5f329e83-b6df-4338-bd89-08e3346dadf3" containerName="git-clone"
Jan 21 09:41:42 crc kubenswrapper[5113]: I0121 09:41:42.053489 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f329e83-b6df-4338-bd89-08e3346dadf3" containerName="git-clone"
Jan 21 09:41:42 crc kubenswrapper[5113]: I0121 09:41:42.053679 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="5f329e83-b6df-4338-bd89-08e3346dadf3" containerName="docker-build"
Jan 21 09:41:42 crc kubenswrapper[5113]: I0121 09:41:42.053702 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="065a28b4-f78d-48c2-843c-53d51595e1a3" containerName="registry-server"
Jan 21 09:41:42 crc kubenswrapper[5113]: I0121 09:41:42.075082 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-5688757f5c-tvmkz"]
Jan 21 09:41:42 crc kubenswrapper[5113]: I0121 09:41:42.075312 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-5688757f5c-tvmkz"
Jan 21 09:41:42 crc kubenswrapper[5113]: I0121 09:41:42.079059 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-dockercfg-s78l6\""
Jan 21 09:41:42 crc kubenswrapper[5113]: I0121 09:41:42.151816 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn7t9\" (UniqueName: \"kubernetes.io/projected/a0cf7e4c-4911-4f2f-8309-b3a890282b6e-kube-api-access-pn7t9\") pod \"smart-gateway-operator-5688757f5c-tvmkz\" (UID: \"a0cf7e4c-4911-4f2f-8309-b3a890282b6e\") " pod="service-telemetry/smart-gateway-operator-5688757f5c-tvmkz"
Jan 21 09:41:42 crc kubenswrapper[5113]: I0121 09:41:42.152081 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/a0cf7e4c-4911-4f2f-8309-b3a890282b6e-runner\") pod \"smart-gateway-operator-5688757f5c-tvmkz\" (UID: \"a0cf7e4c-4911-4f2f-8309-b3a890282b6e\") " pod="service-telemetry/smart-gateway-operator-5688757f5c-tvmkz"
Jan 21 09:41:42 crc kubenswrapper[5113]: I0121 09:41:42.254257 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/a0cf7e4c-4911-4f2f-8309-b3a890282b6e-runner\") pod \"smart-gateway-operator-5688757f5c-tvmkz\" (UID: \"a0cf7e4c-4911-4f2f-8309-b3a890282b6e\") " pod="service-telemetry/smart-gateway-operator-5688757f5c-tvmkz"
Jan 21 09:41:42 crc kubenswrapper[5113]: I0121 09:41:42.254416 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pn7t9\" (UniqueName: \"kubernetes.io/projected/a0cf7e4c-4911-4f2f-8309-b3a890282b6e-kube-api-access-pn7t9\") pod \"smart-gateway-operator-5688757f5c-tvmkz\" (UID: \"a0cf7e4c-4911-4f2f-8309-b3a890282b6e\") " pod="service-telemetry/smart-gateway-operator-5688757f5c-tvmkz"
Jan 21 09:41:42 crc kubenswrapper[5113]: I0121 09:41:42.255224 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/a0cf7e4c-4911-4f2f-8309-b3a890282b6e-runner\") pod \"smart-gateway-operator-5688757f5c-tvmkz\" (UID: \"a0cf7e4c-4911-4f2f-8309-b3a890282b6e\") " pod="service-telemetry/smart-gateway-operator-5688757f5c-tvmkz"
Jan 21 09:41:42 crc kubenswrapper[5113]: I0121 09:41:42.282212 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn7t9\" (UniqueName: \"kubernetes.io/projected/a0cf7e4c-4911-4f2f-8309-b3a890282b6e-kube-api-access-pn7t9\") pod \"smart-gateway-operator-5688757f5c-tvmkz\" (UID: \"a0cf7e4c-4911-4f2f-8309-b3a890282b6e\") " pod="service-telemetry/smart-gateway-operator-5688757f5c-tvmkz"
Jan 21 09:41:42 crc kubenswrapper[5113]: I0121 09:41:42.402570 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-5688757f5c-tvmkz"
Jan 21 09:41:42 crc kubenswrapper[5113]: I0121 09:41:42.938592 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-5688757f5c-tvmkz"]
Jan 21 09:41:43 crc kubenswrapper[5113]: I0121 09:41:43.778380 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-5688757f5c-tvmkz" event={"ID":"a0cf7e4c-4911-4f2f-8309-b3a890282b6e","Type":"ContainerStarted","Data":"f3d32352429d743d07ac4366f982e09a17c46c774ceff7730bdef6b2f6861029"}
Jan 21 09:41:45 crc kubenswrapper[5113]: I0121 09:41:45.452484 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-6c4754584f-gmqc4"]
Jan 21 09:41:45 crc kubenswrapper[5113]: I0121 09:41:45.457405 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-6c4754584f-gmqc4"
Jan 21 09:41:45 crc kubenswrapper[5113]: I0121 09:41:45.461967 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-dockercfg-hnxsb\""
Jan 21 09:41:45 crc kubenswrapper[5113]: I0121 09:41:45.462590 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-6c4754584f-gmqc4"]
Jan 21 09:41:45 crc kubenswrapper[5113]: I0121 09:41:45.600560 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd7l5\" (UniqueName: \"kubernetes.io/projected/34e7f06e-075f-4ccf-a706-5a744ef37c25-kube-api-access-kd7l5\") pod \"service-telemetry-operator-6c4754584f-gmqc4\" (UID: \"34e7f06e-075f-4ccf-a706-5a744ef37c25\") " pod="service-telemetry/service-telemetry-operator-6c4754584f-gmqc4"
Jan 21 09:41:45 crc kubenswrapper[5113]: I0121 09:41:45.600645 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/34e7f06e-075f-4ccf-a706-5a744ef37c25-runner\") pod \"service-telemetry-operator-6c4754584f-gmqc4\" (UID: \"34e7f06e-075f-4ccf-a706-5a744ef37c25\") " pod="service-telemetry/service-telemetry-operator-6c4754584f-gmqc4"
Jan 21 09:41:45 crc kubenswrapper[5113]: I0121 09:41:45.702721 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/34e7f06e-075f-4ccf-a706-5a744ef37c25-runner\") pod \"service-telemetry-operator-6c4754584f-gmqc4\" (UID: \"34e7f06e-075f-4ccf-a706-5a744ef37c25\") " pod="service-telemetry/service-telemetry-operator-6c4754584f-gmqc4"
Jan 21 09:41:45 crc kubenswrapper[5113]: I0121 09:41:45.703334 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/34e7f06e-075f-4ccf-a706-5a744ef37c25-runner\") pod \"service-telemetry-operator-6c4754584f-gmqc4\" (UID: \"34e7f06e-075f-4ccf-a706-5a744ef37c25\") " pod="service-telemetry/service-telemetry-operator-6c4754584f-gmqc4"
Jan 21 09:41:45 crc kubenswrapper[5113]: I0121 09:41:45.703346 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kd7l5\" (UniqueName: \"kubernetes.io/projected/34e7f06e-075f-4ccf-a706-5a744ef37c25-kube-api-access-kd7l5\") pod \"service-telemetry-operator-6c4754584f-gmqc4\" (UID: \"34e7f06e-075f-4ccf-a706-5a744ef37c25\") " pod="service-telemetry/service-telemetry-operator-6c4754584f-gmqc4"
Jan 21 09:41:45 crc kubenswrapper[5113]: I0121 09:41:45.735442 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd7l5\" (UniqueName: \"kubernetes.io/projected/34e7f06e-075f-4ccf-a706-5a744ef37c25-kube-api-access-kd7l5\") pod \"service-telemetry-operator-6c4754584f-gmqc4\" (UID: \"34e7f06e-075f-4ccf-a706-5a744ef37c25\") " pod="service-telemetry/service-telemetry-operator-6c4754584f-gmqc4"
Jan 21 09:41:45 crc kubenswrapper[5113]: I0121 09:41:45.790588 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-6c4754584f-gmqc4"
Jan 21 09:41:53 crc kubenswrapper[5113]: I0121 09:41:53.837958 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-6c4754584f-gmqc4"]
Jan 21 09:42:00 crc kubenswrapper[5113]: I0121 09:42:00.132644 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483142-5qvl9"]
Jan 21 09:42:00 crc kubenswrapper[5113]: I0121 09:42:00.177947 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483142-5qvl9"]
Jan 21 09:42:00 crc kubenswrapper[5113]: I0121 09:42:00.178099 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483142-5qvl9"
Jan 21 09:42:00 crc kubenswrapper[5113]: I0121 09:42:00.183858 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 21 09:42:00 crc kubenswrapper[5113]: I0121 09:42:00.184109 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-mmrr5\""
Jan 21 09:42:00 crc kubenswrapper[5113]: I0121 09:42:00.184388 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 21 09:42:00 crc kubenswrapper[5113]: I0121 09:42:00.305685 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prhf9\" (UniqueName: \"kubernetes.io/projected/3c331cef-9fee-4b59-b161-92a5cecbf022-kube-api-access-prhf9\") pod \"auto-csr-approver-29483142-5qvl9\" (UID: \"3c331cef-9fee-4b59-b161-92a5cecbf022\") " pod="openshift-infra/auto-csr-approver-29483142-5qvl9"
Jan 21 09:42:00 crc kubenswrapper[5113]: I0121 09:42:00.407584 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-prhf9\" (UniqueName: \"kubernetes.io/projected/3c331cef-9fee-4b59-b161-92a5cecbf022-kube-api-access-prhf9\") pod \"auto-csr-approver-29483142-5qvl9\" (UID: \"3c331cef-9fee-4b59-b161-92a5cecbf022\") " pod="openshift-infra/auto-csr-approver-29483142-5qvl9"
Jan 21 09:42:00 crc kubenswrapper[5113]: I0121 09:42:00.432220 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-prhf9\" (UniqueName: \"kubernetes.io/projected/3c331cef-9fee-4b59-b161-92a5cecbf022-kube-api-access-prhf9\") pod \"auto-csr-approver-29483142-5qvl9\" (UID: \"3c331cef-9fee-4b59-b161-92a5cecbf022\") " pod="openshift-infra/auto-csr-approver-29483142-5qvl9"
Jan 21 09:42:00 crc kubenswrapper[5113]: W0121 09:42:00.435774 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34e7f06e_075f_4ccf_a706_5a744ef37c25.slice/crio-f73f8731bce98d8bbb68f809d1cbbb3311e79ef7709a85421a1f73fd230c5645 WatchSource:0}: Error finding container f73f8731bce98d8bbb68f809d1cbbb3311e79ef7709a85421a1f73fd230c5645: Status 404 returned error can't find the container with id f73f8731bce98d8bbb68f809d1cbbb3311e79ef7709a85421a1f73fd230c5645
Jan 21 09:42:00 crc kubenswrapper[5113]: I0121 09:42:00.496041 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483142-5qvl9"
Jan 21 09:42:00 crc kubenswrapper[5113]: I0121 09:42:00.908986 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-6c4754584f-gmqc4" event={"ID":"34e7f06e-075f-4ccf-a706-5a744ef37c25","Type":"ContainerStarted","Data":"f73f8731bce98d8bbb68f809d1cbbb3311e79ef7709a85421a1f73fd230c5645"}
Jan 21 09:42:01 crc kubenswrapper[5113]: I0121 09:42:01.315330 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483142-5qvl9"]
Jan 21 09:42:01 crc kubenswrapper[5113]: I0121 09:42:01.926289 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-5688757f5c-tvmkz" event={"ID":"a0cf7e4c-4911-4f2f-8309-b3a890282b6e","Type":"ContainerStarted","Data":"d0a9a7314182f2721b1ae9478e204b44cac53e3e6d2146dbb656492b795a073c"}
Jan 21 09:42:01 crc kubenswrapper[5113]: I0121 09:42:01.930335 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483142-5qvl9" event={"ID":"3c331cef-9fee-4b59-b161-92a5cecbf022","Type":"ContainerStarted","Data":"cc48e0cf292f2ae176bb206f2447eb0370df96a2d90e2a44631823191478493d"}
Jan 21 09:42:01 crc kubenswrapper[5113]: I0121 09:42:01.944520 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration"
pod="service-telemetry/smart-gateway-operator-5688757f5c-tvmkz" podStartSLOduration=1.880787086 podStartE2EDuration="19.94450234s" podCreationTimestamp="2026-01-21 09:41:42 +0000 UTC" firstStartedPulling="2026-01-21 09:41:42.947811286 +0000 UTC m=+1432.448638335" lastFinishedPulling="2026-01-21 09:42:01.01152653 +0000 UTC m=+1450.512353589" observedRunningTime="2026-01-21 09:42:01.942083072 +0000 UTC m=+1451.442910121" watchObservedRunningTime="2026-01-21 09:42:01.94450234 +0000 UTC m=+1451.445329389" Jan 21 09:42:02 crc kubenswrapper[5113]: I0121 09:42:02.940092 5113 generic.go:358] "Generic (PLEG): container finished" podID="3c331cef-9fee-4b59-b161-92a5cecbf022" containerID="0d6cc7ae66b3c4785b9e49b4a55e79bf5a2d53a6283d2a5b43974320e1586976" exitCode=0 Jan 21 09:42:02 crc kubenswrapper[5113]: I0121 09:42:02.940229 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483142-5qvl9" event={"ID":"3c331cef-9fee-4b59-b161-92a5cecbf022","Type":"ContainerDied","Data":"0d6cc7ae66b3c4785b9e49b4a55e79bf5a2d53a6283d2a5b43974320e1586976"} Jan 21 09:42:06 crc kubenswrapper[5113]: I0121 09:42:06.154139 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483142-5qvl9" Jan 21 09:42:06 crc kubenswrapper[5113]: I0121 09:42:06.303252 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prhf9\" (UniqueName: \"kubernetes.io/projected/3c331cef-9fee-4b59-b161-92a5cecbf022-kube-api-access-prhf9\") pod \"3c331cef-9fee-4b59-b161-92a5cecbf022\" (UID: \"3c331cef-9fee-4b59-b161-92a5cecbf022\") " Jan 21 09:42:06 crc kubenswrapper[5113]: I0121 09:42:06.310435 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c331cef-9fee-4b59-b161-92a5cecbf022-kube-api-access-prhf9" (OuterVolumeSpecName: "kube-api-access-prhf9") pod "3c331cef-9fee-4b59-b161-92a5cecbf022" (UID: "3c331cef-9fee-4b59-b161-92a5cecbf022"). InnerVolumeSpecName "kube-api-access-prhf9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:42:06 crc kubenswrapper[5113]: I0121 09:42:06.404637 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-prhf9\" (UniqueName: \"kubernetes.io/projected/3c331cef-9fee-4b59-b161-92a5cecbf022-kube-api-access-prhf9\") on node \"crc\" DevicePath \"\"" Jan 21 09:42:06 crc kubenswrapper[5113]: I0121 09:42:06.977923 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483142-5qvl9" Jan 21 09:42:06 crc kubenswrapper[5113]: I0121 09:42:06.977926 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483142-5qvl9" event={"ID":"3c331cef-9fee-4b59-b161-92a5cecbf022","Type":"ContainerDied","Data":"cc48e0cf292f2ae176bb206f2447eb0370df96a2d90e2a44631823191478493d"} Jan 21 09:42:06 crc kubenswrapper[5113]: I0121 09:42:06.978366 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc48e0cf292f2ae176bb206f2447eb0370df96a2d90e2a44631823191478493d" Jan 21 09:42:06 crc kubenswrapper[5113]: I0121 09:42:06.980289 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-6c4754584f-gmqc4" event={"ID":"34e7f06e-075f-4ccf-a706-5a744ef37c25","Type":"ContainerStarted","Data":"ee92307c8f1205cc26f4cb289881a9ebd44343f9fbd3178193603d87118e1416"} Jan 21 09:42:07 crc kubenswrapper[5113]: I0121 09:42:07.000305 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-6c4754584f-gmqc4" podStartSLOduration=16.125416581 podStartE2EDuration="22.000288639s" podCreationTimestamp="2026-01-21 09:41:45 +0000 UTC" firstStartedPulling="2026-01-21 09:42:00.440704193 +0000 UTC m=+1449.941531272" lastFinishedPulling="2026-01-21 09:42:06.315576271 +0000 UTC m=+1455.816403330" observedRunningTime="2026-01-21 09:42:06.997088968 +0000 UTC m=+1456.497916047" watchObservedRunningTime="2026-01-21 09:42:07.000288639 +0000 UTC m=+1456.501115688" Jan 21 09:42:07 crc kubenswrapper[5113]: I0121 09:42:07.199298 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483136-wcps4"] Jan 21 09:42:07 crc kubenswrapper[5113]: I0121 09:42:07.203340 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483136-wcps4"] Jan 21 09:42:08 crc kubenswrapper[5113]: 
I0121 09:42:08.853005 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78bc24ef-816e-4958-bb7a-0ac6f7ce5b5c" path="/var/lib/kubelet/pods/78bc24ef-816e-4958-bb7a-0ac6f7ce5b5c/volumes" Jan 21 09:42:27 crc kubenswrapper[5113]: I0121 09:42:27.461391 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-zmw4m"] Jan 21 09:42:27 crc kubenswrapper[5113]: I0121 09:42:27.463725 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3c331cef-9fee-4b59-b161-92a5cecbf022" containerName="oc" Jan 21 09:42:27 crc kubenswrapper[5113]: I0121 09:42:27.463778 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c331cef-9fee-4b59-b161-92a5cecbf022" containerName="oc" Jan 21 09:42:27 crc kubenswrapper[5113]: I0121 09:42:27.463921 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="3c331cef-9fee-4b59-b161-92a5cecbf022" containerName="oc" Jan 21 09:42:28 crc kubenswrapper[5113]: I0121 09:42:28.340511 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 09:42:28 crc kubenswrapper[5113]: I0121 09:42:28.341092 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 09:42:29 crc kubenswrapper[5113]: I0121 09:42:29.009507 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-zmw4m" Jan 21 09:42:29 crc kubenswrapper[5113]: I0121 09:42:29.013806 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-ca\"" Jan 21 09:42:29 crc kubenswrapper[5113]: I0121 09:42:29.014171 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-ca\"" Jan 21 09:42:29 crc kubenswrapper[5113]: I0121 09:42:29.014412 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-users\"" Jan 21 09:42:29 crc kubenswrapper[5113]: I0121 09:42:29.017853 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-dockercfg-6wx6p\"" Jan 21 09:42:29 crc kubenswrapper[5113]: I0121 09:42:29.018782 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-interconnect-sasl-config\"" Jan 21 09:42:29 crc kubenswrapper[5113]: I0121 09:42:29.021985 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-zmw4m"] Jan 21 09:42:29 crc kubenswrapper[5113]: I0121 09:42:29.022316 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-credentials\"" Jan 21 09:42:29 crc kubenswrapper[5113]: I0121 09:42:29.022806 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-credentials\"" Jan 21 09:42:29 crc kubenswrapper[5113]: I0121 09:42:29.107348 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: 
\"kubernetes.io/secret/e7103133-6521-4b0f-a3ca-068d626b27d5-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-zmw4m\" (UID: \"e7103133-6521-4b0f-a3ca-068d626b27d5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zmw4m" Jan 21 09:42:29 crc kubenswrapper[5113]: I0121 09:42:29.107540 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/e7103133-6521-4b0f-a3ca-068d626b27d5-sasl-users\") pod \"default-interconnect-55bf8d5cb-zmw4m\" (UID: \"e7103133-6521-4b0f-a3ca-068d626b27d5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zmw4m" Jan 21 09:42:29 crc kubenswrapper[5113]: I0121 09:42:29.107612 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/e7103133-6521-4b0f-a3ca-068d626b27d5-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-zmw4m\" (UID: \"e7103133-6521-4b0f-a3ca-068d626b27d5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zmw4m" Jan 21 09:42:29 crc kubenswrapper[5113]: I0121 09:42:29.107830 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/e7103133-6521-4b0f-a3ca-068d626b27d5-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-zmw4m\" (UID: \"e7103133-6521-4b0f-a3ca-068d626b27d5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zmw4m" Jan 21 09:42:29 crc kubenswrapper[5113]: I0121 09:42:29.107996 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/e7103133-6521-4b0f-a3ca-068d626b27d5-sasl-config\") pod \"default-interconnect-55bf8d5cb-zmw4m\" (UID: 
\"e7103133-6521-4b0f-a3ca-068d626b27d5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zmw4m" Jan 21 09:42:29 crc kubenswrapper[5113]: I0121 09:42:29.108076 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/e7103133-6521-4b0f-a3ca-068d626b27d5-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-zmw4m\" (UID: \"e7103133-6521-4b0f-a3ca-068d626b27d5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zmw4m" Jan 21 09:42:29 crc kubenswrapper[5113]: I0121 09:42:29.108192 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8vcn\" (UniqueName: \"kubernetes.io/projected/e7103133-6521-4b0f-a3ca-068d626b27d5-kube-api-access-m8vcn\") pod \"default-interconnect-55bf8d5cb-zmw4m\" (UID: \"e7103133-6521-4b0f-a3ca-068d626b27d5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zmw4m" Jan 21 09:42:29 crc kubenswrapper[5113]: I0121 09:42:29.209678 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/e7103133-6521-4b0f-a3ca-068d626b27d5-sasl-users\") pod \"default-interconnect-55bf8d5cb-zmw4m\" (UID: \"e7103133-6521-4b0f-a3ca-068d626b27d5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zmw4m" Jan 21 09:42:29 crc kubenswrapper[5113]: I0121 09:42:29.209730 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/e7103133-6521-4b0f-a3ca-068d626b27d5-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-zmw4m\" (UID: \"e7103133-6521-4b0f-a3ca-068d626b27d5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zmw4m" Jan 21 09:42:29 crc kubenswrapper[5113]: I0121 09:42:29.209786 5113 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/e7103133-6521-4b0f-a3ca-068d626b27d5-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-zmw4m\" (UID: \"e7103133-6521-4b0f-a3ca-068d626b27d5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zmw4m" Jan 21 09:42:29 crc kubenswrapper[5113]: I0121 09:42:29.209824 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/e7103133-6521-4b0f-a3ca-068d626b27d5-sasl-config\") pod \"default-interconnect-55bf8d5cb-zmw4m\" (UID: \"e7103133-6521-4b0f-a3ca-068d626b27d5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zmw4m" Jan 21 09:42:29 crc kubenswrapper[5113]: I0121 09:42:29.209846 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/e7103133-6521-4b0f-a3ca-068d626b27d5-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-zmw4m\" (UID: \"e7103133-6521-4b0f-a3ca-068d626b27d5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zmw4m" Jan 21 09:42:29 crc kubenswrapper[5113]: I0121 09:42:29.209867 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m8vcn\" (UniqueName: \"kubernetes.io/projected/e7103133-6521-4b0f-a3ca-068d626b27d5-kube-api-access-m8vcn\") pod \"default-interconnect-55bf8d5cb-zmw4m\" (UID: \"e7103133-6521-4b0f-a3ca-068d626b27d5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zmw4m" Jan 21 09:42:29 crc kubenswrapper[5113]: I0121 09:42:29.209907 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/e7103133-6521-4b0f-a3ca-068d626b27d5-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-zmw4m\" 
(UID: \"e7103133-6521-4b0f-a3ca-068d626b27d5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zmw4m" Jan 21 09:42:29 crc kubenswrapper[5113]: I0121 09:42:29.211351 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/e7103133-6521-4b0f-a3ca-068d626b27d5-sasl-config\") pod \"default-interconnect-55bf8d5cb-zmw4m\" (UID: \"e7103133-6521-4b0f-a3ca-068d626b27d5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zmw4m" Jan 21 09:42:29 crc kubenswrapper[5113]: I0121 09:42:29.215891 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/e7103133-6521-4b0f-a3ca-068d626b27d5-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-zmw4m\" (UID: \"e7103133-6521-4b0f-a3ca-068d626b27d5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zmw4m" Jan 21 09:42:29 crc kubenswrapper[5113]: I0121 09:42:29.216683 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/e7103133-6521-4b0f-a3ca-068d626b27d5-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-zmw4m\" (UID: \"e7103133-6521-4b0f-a3ca-068d626b27d5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zmw4m" Jan 21 09:42:29 crc kubenswrapper[5113]: I0121 09:42:29.216780 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/e7103133-6521-4b0f-a3ca-068d626b27d5-sasl-users\") pod \"default-interconnect-55bf8d5cb-zmw4m\" (UID: \"e7103133-6521-4b0f-a3ca-068d626b27d5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zmw4m" Jan 21 09:42:29 crc kubenswrapper[5113]: I0121 09:42:29.216993 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/e7103133-6521-4b0f-a3ca-068d626b27d5-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-zmw4m\" (UID: \"e7103133-6521-4b0f-a3ca-068d626b27d5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zmw4m" Jan 21 09:42:29 crc kubenswrapper[5113]: I0121 09:42:29.218279 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/e7103133-6521-4b0f-a3ca-068d626b27d5-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-zmw4m\" (UID: \"e7103133-6521-4b0f-a3ca-068d626b27d5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zmw4m" Jan 21 09:42:29 crc kubenswrapper[5113]: I0121 09:42:29.229299 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8vcn\" (UniqueName: \"kubernetes.io/projected/e7103133-6521-4b0f-a3ca-068d626b27d5-kube-api-access-m8vcn\") pod \"default-interconnect-55bf8d5cb-zmw4m\" (UID: \"e7103133-6521-4b0f-a3ca-068d626b27d5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-zmw4m" Jan 21 09:42:29 crc kubenswrapper[5113]: I0121 09:42:29.336257 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-zmw4m" Jan 21 09:42:29 crc kubenswrapper[5113]: W0121 09:42:29.638327 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7103133_6521_4b0f_a3ca_068d626b27d5.slice/crio-5d24566eaf38f451d453f890f873467703703a1b451ea1c4453216b1d397a1b5 WatchSource:0}: Error finding container 5d24566eaf38f451d453f890f873467703703a1b451ea1c4453216b1d397a1b5: Status 404 returned error can't find the container with id 5d24566eaf38f451d453f890f873467703703a1b451ea1c4453216b1d397a1b5 Jan 21 09:42:29 crc kubenswrapper[5113]: I0121 09:42:29.641008 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-zmw4m"] Jan 21 09:42:29 crc kubenswrapper[5113]: I0121 09:42:29.926180 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-zmw4m" event={"ID":"e7103133-6521-4b0f-a3ca-068d626b27d5","Type":"ContainerStarted","Data":"5d24566eaf38f451d453f890f873467703703a1b451ea1c4453216b1d397a1b5"} Jan 21 09:42:35 crc kubenswrapper[5113]: I0121 09:42:35.975578 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-zmw4m" event={"ID":"e7103133-6521-4b0f-a3ca-068d626b27d5","Type":"ContainerStarted","Data":"20fbeecef7e154783d70d6ad56ce965f7c272e9c9ec3451c31845496ff8b5eb6"} Jan 21 09:42:36 crc kubenswrapper[5113]: I0121 09:42:36.004023 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-zmw4m" podStartSLOduration=3.575724408 podStartE2EDuration="9.004002583s" podCreationTimestamp="2026-01-21 09:42:27 +0000 UTC" firstStartedPulling="2026-01-21 09:42:29.642692826 +0000 UTC m=+1479.143519885" lastFinishedPulling="2026-01-21 09:42:35.070971011 +0000 UTC m=+1484.571798060" observedRunningTime="2026-01-21 09:42:35.991251272 
+0000 UTC m=+1485.492078351" watchObservedRunningTime="2026-01-21 09:42:36.004002583 +0000 UTC m=+1485.504829632" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.236059 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.255146 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.259979 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-2\"" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.260778 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-1\"" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.261305 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default\"" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.261878 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-0\"" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.261886 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-web-config\"" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.262201 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-prometheus-proxy-tls\"" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.262515 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-session-secret\"" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.262718 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"service-telemetry\"/\"serving-certs-ca-bundle\"" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.262992 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-stf-dockercfg-hjxf6\"" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.264410 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.265235 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-tls-assets-0\"" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.397041 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7daab145-3025-4d93-bb61-8921bd849a13-config\") pod \"prometheus-default-0\" (UID: \"7daab145-3025-4d93-bb61-8921bd849a13\") " pod="service-telemetry/prometheus-default-0" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.397087 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7daab145-3025-4d93-bb61-8921bd849a13-tls-assets\") pod \"prometheus-default-0\" (UID: \"7daab145-3025-4d93-bb61-8921bd849a13\") " pod="service-telemetry/prometheus-default-0" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.397116 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7daab145-3025-4d93-bb61-8921bd849a13-config-out\") pod \"prometheus-default-0\" (UID: \"7daab145-3025-4d93-bb61-8921bd849a13\") " pod="service-telemetry/prometheus-default-0" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.397133 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/7daab145-3025-4d93-bb61-8921bd849a13-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"7daab145-3025-4d93-bb61-8921bd849a13\") " pod="service-telemetry/prometheus-default-0" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.397164 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5482be5f-4772-460a-8d5b-97f3027f321a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5482be5f-4772-460a-8d5b-97f3027f321a\") pod \"prometheus-default-0\" (UID: \"7daab145-3025-4d93-bb61-8921bd849a13\") " pod="service-telemetry/prometheus-default-0" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.397289 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/7daab145-3025-4d93-bb61-8921bd849a13-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"7daab145-3025-4d93-bb61-8921bd849a13\") " pod="service-telemetry/prometheus-default-0" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.397521 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjjw6\" (UniqueName: \"kubernetes.io/projected/7daab145-3025-4d93-bb61-8921bd849a13-kube-api-access-xjjw6\") pod \"prometheus-default-0\" (UID: \"7daab145-3025-4d93-bb61-8921bd849a13\") " pod="service-telemetry/prometheus-default-0" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.397657 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7daab145-3025-4d93-bb61-8921bd849a13-web-config\") pod \"prometheus-default-0\" (UID: \"7daab145-3025-4d93-bb61-8921bd849a13\") " pod="service-telemetry/prometheus-default-0" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.397754 5113 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7daab145-3025-4d93-bb61-8921bd849a13-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"7daab145-3025-4d93-bb61-8921bd849a13\") " pod="service-telemetry/prometheus-default-0" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.397800 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7daab145-3025-4d93-bb61-8921bd849a13-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"7daab145-3025-4d93-bb61-8921bd849a13\") " pod="service-telemetry/prometheus-default-0" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.397826 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7daab145-3025-4d93-bb61-8921bd849a13-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"7daab145-3025-4d93-bb61-8921bd849a13\") " pod="service-telemetry/prometheus-default-0" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.397848 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7daab145-3025-4d93-bb61-8921bd849a13-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"7daab145-3025-4d93-bb61-8921bd849a13\") " pod="service-telemetry/prometheus-default-0" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.499335 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7daab145-3025-4d93-bb61-8921bd849a13-config\") pod \"prometheus-default-0\" (UID: \"7daab145-3025-4d93-bb61-8921bd849a13\") " 
pod="service-telemetry/prometheus-default-0" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.499418 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7daab145-3025-4d93-bb61-8921bd849a13-tls-assets\") pod \"prometheus-default-0\" (UID: \"7daab145-3025-4d93-bb61-8921bd849a13\") " pod="service-telemetry/prometheus-default-0" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.499472 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7daab145-3025-4d93-bb61-8921bd849a13-config-out\") pod \"prometheus-default-0\" (UID: \"7daab145-3025-4d93-bb61-8921bd849a13\") " pod="service-telemetry/prometheus-default-0" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.499511 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7daab145-3025-4d93-bb61-8921bd849a13-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"7daab145-3025-4d93-bb61-8921bd849a13\") " pod="service-telemetry/prometheus-default-0" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.499575 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-5482be5f-4772-460a-8d5b-97f3027f321a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5482be5f-4772-460a-8d5b-97f3027f321a\") pod \"prometheus-default-0\" (UID: \"7daab145-3025-4d93-bb61-8921bd849a13\") " pod="service-telemetry/prometheus-default-0" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.499620 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/7daab145-3025-4d93-bb61-8921bd849a13-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"7daab145-3025-4d93-bb61-8921bd849a13\") " 
pod="service-telemetry/prometheus-default-0" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.499687 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xjjw6\" (UniqueName: \"kubernetes.io/projected/7daab145-3025-4d93-bb61-8921bd849a13-kube-api-access-xjjw6\") pod \"prometheus-default-0\" (UID: \"7daab145-3025-4d93-bb61-8921bd849a13\") " pod="service-telemetry/prometheus-default-0" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.499780 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7daab145-3025-4d93-bb61-8921bd849a13-web-config\") pod \"prometheus-default-0\" (UID: \"7daab145-3025-4d93-bb61-8921bd849a13\") " pod="service-telemetry/prometheus-default-0" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.499835 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7daab145-3025-4d93-bb61-8921bd849a13-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"7daab145-3025-4d93-bb61-8921bd849a13\") " pod="service-telemetry/prometheus-default-0" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.499905 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7daab145-3025-4d93-bb61-8921bd849a13-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"7daab145-3025-4d93-bb61-8921bd849a13\") " pod="service-telemetry/prometheus-default-0" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.499944 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7daab145-3025-4d93-bb61-8921bd849a13-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: 
\"7daab145-3025-4d93-bb61-8921bd849a13\") " pod="service-telemetry/prometheus-default-0" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.499976 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7daab145-3025-4d93-bb61-8921bd849a13-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"7daab145-3025-4d93-bb61-8921bd849a13\") " pod="service-telemetry/prometheus-default-0" Jan 21 09:42:40 crc kubenswrapper[5113]: E0121 09:42:40.500975 5113 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Jan 21 09:42:40 crc kubenswrapper[5113]: E0121 09:42:40.501082 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7daab145-3025-4d93-bb61-8921bd849a13-secret-default-prometheus-proxy-tls podName:7daab145-3025-4d93-bb61-8921bd849a13 nodeName:}" failed. No retries permitted until 2026-01-21 09:42:41.001053827 +0000 UTC m=+1490.501880916 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/7daab145-3025-4d93-bb61-8921bd849a13-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "7daab145-3025-4d93-bb61-8921bd849a13") : secret "default-prometheus-proxy-tls" not found Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.501532 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7daab145-3025-4d93-bb61-8921bd849a13-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"7daab145-3025-4d93-bb61-8921bd849a13\") " pod="service-telemetry/prometheus-default-0" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.501832 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7daab145-3025-4d93-bb61-8921bd849a13-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"7daab145-3025-4d93-bb61-8921bd849a13\") " pod="service-telemetry/prometheus-default-0" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.502137 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7daab145-3025-4d93-bb61-8921bd849a13-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"7daab145-3025-4d93-bb61-8921bd849a13\") " pod="service-telemetry/prometheus-default-0" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.502171 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7daab145-3025-4d93-bb61-8921bd849a13-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"7daab145-3025-4d93-bb61-8921bd849a13\") " pod="service-telemetry/prometheus-default-0" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 
09:42:40.507557 5113 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.507614 5113 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-5482be5f-4772-460a-8d5b-97f3027f321a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5482be5f-4772-460a-8d5b-97f3027f321a\") pod \"prometheus-default-0\" (UID: \"7daab145-3025-4d93-bb61-8921bd849a13\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/29532ac3d712647c044f4edcd2cd48fc8466d853c1a966beea8bb4ed08ea312b/globalmount\"" pod="service-telemetry/prometheus-default-0" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.508037 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/7daab145-3025-4d93-bb61-8921bd849a13-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"7daab145-3025-4d93-bb61-8921bd849a13\") " pod="service-telemetry/prometheus-default-0" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.511707 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7daab145-3025-4d93-bb61-8921bd849a13-tls-assets\") pod \"prometheus-default-0\" (UID: \"7daab145-3025-4d93-bb61-8921bd849a13\") " pod="service-telemetry/prometheus-default-0" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.519958 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7daab145-3025-4d93-bb61-8921bd849a13-config-out\") pod \"prometheus-default-0\" (UID: \"7daab145-3025-4d93-bb61-8921bd849a13\") " pod="service-telemetry/prometheus-default-0" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.520300 5113 operation_generator.go:615] "MountVolume.SetUp succeeded 
for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7daab145-3025-4d93-bb61-8921bd849a13-web-config\") pod \"prometheus-default-0\" (UID: \"7daab145-3025-4d93-bb61-8921bd849a13\") " pod="service-telemetry/prometheus-default-0" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.529684 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7daab145-3025-4d93-bb61-8921bd849a13-config\") pod \"prometheus-default-0\" (UID: \"7daab145-3025-4d93-bb61-8921bd849a13\") " pod="service-telemetry/prometheus-default-0" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.529704 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjjw6\" (UniqueName: \"kubernetes.io/projected/7daab145-3025-4d93-bb61-8921bd849a13-kube-api-access-xjjw6\") pod \"prometheus-default-0\" (UID: \"7daab145-3025-4d93-bb61-8921bd849a13\") " pod="service-telemetry/prometheus-default-0" Jan 21 09:42:40 crc kubenswrapper[5113]: I0121 09:42:40.543178 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-5482be5f-4772-460a-8d5b-97f3027f321a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5482be5f-4772-460a-8d5b-97f3027f321a\") pod \"prometheus-default-0\" (UID: \"7daab145-3025-4d93-bb61-8921bd849a13\") " pod="service-telemetry/prometheus-default-0" Jan 21 09:42:41 crc kubenswrapper[5113]: I0121 09:42:41.027198 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7daab145-3025-4d93-bb61-8921bd849a13-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"7daab145-3025-4d93-bb61-8921bd849a13\") " pod="service-telemetry/prometheus-default-0" Jan 21 09:42:41 crc kubenswrapper[5113]: I0121 09:42:41.035156 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" 
(UniqueName: \"kubernetes.io/secret/7daab145-3025-4d93-bb61-8921bd849a13-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"7daab145-3025-4d93-bb61-8921bd849a13\") " pod="service-telemetry/prometheus-default-0" Jan 21 09:42:41 crc kubenswrapper[5113]: I0121 09:42:41.192062 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0" Jan 21 09:42:41 crc kubenswrapper[5113]: I0121 09:42:41.423622 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Jan 21 09:42:42 crc kubenswrapper[5113]: I0121 09:42:42.032512 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"7daab145-3025-4d93-bb61-8921bd849a13","Type":"ContainerStarted","Data":"816a4d053f6d8cfedd48c94cf20850fee8c221c283b4b090ae8f7a0e439b58aa"} Jan 21 09:42:46 crc kubenswrapper[5113]: I0121 09:42:46.066196 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"7daab145-3025-4d93-bb61-8921bd849a13","Type":"ContainerStarted","Data":"93eac72e45436e36ced3331e14768938346812e1ec7698b037ec9b5381f87871"} Jan 21 09:42:50 crc kubenswrapper[5113]: I0121 09:42:50.048197 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-694dc457d5-nvtcp"] Jan 21 09:42:50 crc kubenswrapper[5113]: I0121 09:42:50.060893 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-694dc457d5-nvtcp"] Jan 21 09:42:50 crc kubenswrapper[5113]: I0121 09:42:50.061034 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-694dc457d5-nvtcp" Jan 21 09:42:50 crc kubenswrapper[5113]: I0121 09:42:50.164596 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27s69\" (UniqueName: \"kubernetes.io/projected/904fae67-943b-4c4e-b2a9-969896ca1635-kube-api-access-27s69\") pod \"default-snmp-webhook-694dc457d5-nvtcp\" (UID: \"904fae67-943b-4c4e-b2a9-969896ca1635\") " pod="service-telemetry/default-snmp-webhook-694dc457d5-nvtcp" Jan 21 09:42:50 crc kubenswrapper[5113]: I0121 09:42:50.266314 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-27s69\" (UniqueName: \"kubernetes.io/projected/904fae67-943b-4c4e-b2a9-969896ca1635-kube-api-access-27s69\") pod \"default-snmp-webhook-694dc457d5-nvtcp\" (UID: \"904fae67-943b-4c4e-b2a9-969896ca1635\") " pod="service-telemetry/default-snmp-webhook-694dc457d5-nvtcp" Jan 21 09:42:50 crc kubenswrapper[5113]: I0121 09:42:50.296092 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-27s69\" (UniqueName: \"kubernetes.io/projected/904fae67-943b-4c4e-b2a9-969896ca1635-kube-api-access-27s69\") pod \"default-snmp-webhook-694dc457d5-nvtcp\" (UID: \"904fae67-943b-4c4e-b2a9-969896ca1635\") " pod="service-telemetry/default-snmp-webhook-694dc457d5-nvtcp" Jan 21 09:42:50 crc kubenswrapper[5113]: I0121 09:42:50.377298 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-694dc457d5-nvtcp" Jan 21 09:42:50 crc kubenswrapper[5113]: I0121 09:42:50.621576 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-694dc457d5-nvtcp"] Jan 21 09:42:51 crc kubenswrapper[5113]: I0121 09:42:51.102218 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-694dc457d5-nvtcp" event={"ID":"904fae67-943b-4c4e-b2a9-969896ca1635","Type":"ContainerStarted","Data":"bc72388dc451e02bcd45e2c624e0a898245097db11313ebed86a7a7ddab04a09"} Jan 21 09:42:51 crc kubenswrapper[5113]: I0121 09:42:51.478165 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vcw7s_11da35cd-b282-4537-ac8f-b6c86b18c21f/kube-multus/0.log" Jan 21 09:42:51 crc kubenswrapper[5113]: I0121 09:42:51.487159 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 09:42:51 crc kubenswrapper[5113]: I0121 09:42:51.493003 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vcw7s_11da35cd-b282-4537-ac8f-b6c86b18c21f/kube-multus/0.log" Jan 21 09:42:51 crc kubenswrapper[5113]: I0121 09:42:51.504700 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 09:42:53 crc kubenswrapper[5113]: I0121 09:42:53.661363 5113 scope.go:117] "RemoveContainer" containerID="88183488f1c50d70a02d6077602af648e12477a7713aa8aba23a12016987aed0" Jan 21 09:42:53 crc kubenswrapper[5113]: I0121 09:42:53.875503 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"] Jan 21 09:42:53 crc kubenswrapper[5113]: I0121 09:42:53.889070 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Jan 21 09:42:53 crc kubenswrapper[5113]: I0121 09:42:53.890881 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Jan 21 09:42:53 crc kubenswrapper[5113]: I0121 09:42:53.892787 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-cluster-tls-config\"" Jan 21 09:42:53 crc kubenswrapper[5113]: I0121 09:42:53.892800 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-web-config\"" Jan 21 09:42:53 crc kubenswrapper[5113]: I0121 09:42:53.892841 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-tls-assets-0\"" Jan 21 09:42:53 crc kubenswrapper[5113]: I0121 09:42:53.892936 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-stf-dockercfg-58rtp\"" Jan 21 09:42:53 crc kubenswrapper[5113]: I0121 09:42:53.895220 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-generated\"" Jan 21 09:42:53 crc kubenswrapper[5113]: I0121 09:42:53.895235 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-alertmanager-proxy-tls\"" Jan 21 09:42:54 crc kubenswrapper[5113]: I0121 09:42:54.032512 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c54c1065-fd71-4792-95c5-555b4af863c4-tls-assets\") pod \"alertmanager-default-0\" (UID: \"c54c1065-fd71-4792-95c5-555b4af863c4\") " pod="service-telemetry/alertmanager-default-0" Jan 21 09:42:54 crc kubenswrapper[5113]: I0121 09:42:54.032587 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/c54c1065-fd71-4792-95c5-555b4af863c4-config-volume\") pod \"alertmanager-default-0\" (UID: \"c54c1065-fd71-4792-95c5-555b4af863c4\") " pod="service-telemetry/alertmanager-default-0" Jan 21 09:42:54 crc kubenswrapper[5113]: I0121 09:42:54.032616 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pfxg\" (UniqueName: \"kubernetes.io/projected/c54c1065-fd71-4792-95c5-555b4af863c4-kube-api-access-4pfxg\") pod \"alertmanager-default-0\" (UID: \"c54c1065-fd71-4792-95c5-555b4af863c4\") " pod="service-telemetry/alertmanager-default-0" Jan 21 09:42:54 crc kubenswrapper[5113]: I0121 09:42:54.032656 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/c54c1065-fd71-4792-95c5-555b4af863c4-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"c54c1065-fd71-4792-95c5-555b4af863c4\") " pod="service-telemetry/alertmanager-default-0" Jan 21 09:42:54 crc kubenswrapper[5113]: I0121 09:42:54.032674 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c54c1065-fd71-4792-95c5-555b4af863c4-config-out\") pod \"alertmanager-default-0\" (UID: \"c54c1065-fd71-4792-95c5-555b4af863c4\") " pod="service-telemetry/alertmanager-default-0" Jan 21 09:42:54 crc kubenswrapper[5113]: I0121 09:42:54.032886 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/c54c1065-fd71-4792-95c5-555b4af863c4-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"c54c1065-fd71-4792-95c5-555b4af863c4\") " pod="service-telemetry/alertmanager-default-0" Jan 21 09:42:54 crc kubenswrapper[5113]: I0121 09:42:54.033019 
5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-38369ce3-c291-478a-9b57-06ecf6a31e71\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38369ce3-c291-478a-9b57-06ecf6a31e71\") pod \"alertmanager-default-0\" (UID: \"c54c1065-fd71-4792-95c5-555b4af863c4\") " pod="service-telemetry/alertmanager-default-0" Jan 21 09:42:54 crc kubenswrapper[5113]: I0121 09:42:54.033075 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/c54c1065-fd71-4792-95c5-555b4af863c4-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"c54c1065-fd71-4792-95c5-555b4af863c4\") " pod="service-telemetry/alertmanager-default-0" Jan 21 09:42:54 crc kubenswrapper[5113]: I0121 09:42:54.033139 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c54c1065-fd71-4792-95c5-555b4af863c4-web-config\") pod \"alertmanager-default-0\" (UID: \"c54c1065-fd71-4792-95c5-555b4af863c4\") " pod="service-telemetry/alertmanager-default-0" Jan 21 09:42:54 crc kubenswrapper[5113]: I0121 09:42:54.121577 5113 generic.go:358] "Generic (PLEG): container finished" podID="7daab145-3025-4d93-bb61-8921bd849a13" containerID="93eac72e45436e36ced3331e14768938346812e1ec7698b037ec9b5381f87871" exitCode=0 Jan 21 09:42:54 crc kubenswrapper[5113]: I0121 09:42:54.121657 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"7daab145-3025-4d93-bb61-8921bd849a13","Type":"ContainerDied","Data":"93eac72e45436e36ced3331e14768938346812e1ec7698b037ec9b5381f87871"} Jan 21 09:42:54 crc kubenswrapper[5113]: I0121 09:42:54.134262 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-38369ce3-c291-478a-9b57-06ecf6a31e71\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38369ce3-c291-478a-9b57-06ecf6a31e71\") pod \"alertmanager-default-0\" (UID: \"c54c1065-fd71-4792-95c5-555b4af863c4\") " pod="service-telemetry/alertmanager-default-0" Jan 21 09:42:54 crc kubenswrapper[5113]: I0121 09:42:54.134300 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/c54c1065-fd71-4792-95c5-555b4af863c4-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"c54c1065-fd71-4792-95c5-555b4af863c4\") " pod="service-telemetry/alertmanager-default-0" Jan 21 09:42:54 crc kubenswrapper[5113]: I0121 09:42:54.134331 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c54c1065-fd71-4792-95c5-555b4af863c4-web-config\") pod \"alertmanager-default-0\" (UID: \"c54c1065-fd71-4792-95c5-555b4af863c4\") " pod="service-telemetry/alertmanager-default-0" Jan 21 09:42:54 crc kubenswrapper[5113]: I0121 09:42:54.134364 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c54c1065-fd71-4792-95c5-555b4af863c4-tls-assets\") pod \"alertmanager-default-0\" (UID: \"c54c1065-fd71-4792-95c5-555b4af863c4\") " pod="service-telemetry/alertmanager-default-0" Jan 21 09:42:54 crc kubenswrapper[5113]: I0121 09:42:54.134391 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/c54c1065-fd71-4792-95c5-555b4af863c4-config-volume\") pod \"alertmanager-default-0\" (UID: \"c54c1065-fd71-4792-95c5-555b4af863c4\") " pod="service-telemetry/alertmanager-default-0" Jan 21 09:42:54 crc kubenswrapper[5113]: I0121 09:42:54.134411 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4pfxg\" (UniqueName: 
\"kubernetes.io/projected/c54c1065-fd71-4792-95c5-555b4af863c4-kube-api-access-4pfxg\") pod \"alertmanager-default-0\" (UID: \"c54c1065-fd71-4792-95c5-555b4af863c4\") " pod="service-telemetry/alertmanager-default-0" Jan 21 09:42:54 crc kubenswrapper[5113]: I0121 09:42:54.134448 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/c54c1065-fd71-4792-95c5-555b4af863c4-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"c54c1065-fd71-4792-95c5-555b4af863c4\") " pod="service-telemetry/alertmanager-default-0" Jan 21 09:42:54 crc kubenswrapper[5113]: I0121 09:42:54.134479 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c54c1065-fd71-4792-95c5-555b4af863c4-config-out\") pod \"alertmanager-default-0\" (UID: \"c54c1065-fd71-4792-95c5-555b4af863c4\") " pod="service-telemetry/alertmanager-default-0" Jan 21 09:42:54 crc kubenswrapper[5113]: I0121 09:42:54.134514 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/c54c1065-fd71-4792-95c5-555b4af863c4-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"c54c1065-fd71-4792-95c5-555b4af863c4\") " pod="service-telemetry/alertmanager-default-0" Jan 21 09:42:54 crc kubenswrapper[5113]: E0121 09:42:54.134608 5113 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Jan 21 09:42:54 crc kubenswrapper[5113]: E0121 09:42:54.134661 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c54c1065-fd71-4792-95c5-555b4af863c4-secret-default-alertmanager-proxy-tls podName:c54c1065-fd71-4792-95c5-555b4af863c4 nodeName:}" failed. 
No retries permitted until 2026-01-21 09:42:54.63464505 +0000 UTC m=+1504.135472099 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/c54c1065-fd71-4792-95c5-555b4af863c4-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "c54c1065-fd71-4792-95c5-555b4af863c4") : secret "default-alertmanager-proxy-tls" not found Jan 21 09:42:54 crc kubenswrapper[5113]: I0121 09:42:54.141181 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c54c1065-fd71-4792-95c5-555b4af863c4-tls-assets\") pod \"alertmanager-default-0\" (UID: \"c54c1065-fd71-4792-95c5-555b4af863c4\") " pod="service-telemetry/alertmanager-default-0" Jan 21 09:42:54 crc kubenswrapper[5113]: I0121 09:42:54.141185 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c54c1065-fd71-4792-95c5-555b4af863c4-web-config\") pod \"alertmanager-default-0\" (UID: \"c54c1065-fd71-4792-95c5-555b4af863c4\") " pod="service-telemetry/alertmanager-default-0" Jan 21 09:42:54 crc kubenswrapper[5113]: I0121 09:42:54.143156 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/c54c1065-fd71-4792-95c5-555b4af863c4-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"c54c1065-fd71-4792-95c5-555b4af863c4\") " pod="service-telemetry/alertmanager-default-0" Jan 21 09:42:54 crc kubenswrapper[5113]: I0121 09:42:54.143725 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c54c1065-fd71-4792-95c5-555b4af863c4-config-out\") pod \"alertmanager-default-0\" (UID: \"c54c1065-fd71-4792-95c5-555b4af863c4\") " pod="service-telemetry/alertmanager-default-0" Jan 21 09:42:54 crc kubenswrapper[5113]: I0121 09:42:54.144197 5113 
csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 09:42:54 crc kubenswrapper[5113]: I0121 09:42:54.144229 5113 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-38369ce3-c291-478a-9b57-06ecf6a31e71\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38369ce3-c291-478a-9b57-06ecf6a31e71\") pod \"alertmanager-default-0\" (UID: \"c54c1065-fd71-4792-95c5-555b4af863c4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c651958050471357d3f9037195b0abc985df8b441aa37ad4fb27454fb92b85df/globalmount\"" pod="service-telemetry/alertmanager-default-0" Jan 21 09:42:54 crc kubenswrapper[5113]: I0121 09:42:54.147560 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/c54c1065-fd71-4792-95c5-555b4af863c4-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"c54c1065-fd71-4792-95c5-555b4af863c4\") " pod="service-telemetry/alertmanager-default-0" Jan 21 09:42:54 crc kubenswrapper[5113]: I0121 09:42:54.153256 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/c54c1065-fd71-4792-95c5-555b4af863c4-config-volume\") pod \"alertmanager-default-0\" (UID: \"c54c1065-fd71-4792-95c5-555b4af863c4\") " pod="service-telemetry/alertmanager-default-0" Jan 21 09:42:54 crc kubenswrapper[5113]: I0121 09:42:54.162401 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pfxg\" (UniqueName: \"kubernetes.io/projected/c54c1065-fd71-4792-95c5-555b4af863c4-kube-api-access-4pfxg\") pod \"alertmanager-default-0\" (UID: \"c54c1065-fd71-4792-95c5-555b4af863c4\") " pod="service-telemetry/alertmanager-default-0" Jan 21 09:42:54 crc kubenswrapper[5113]: I0121 09:42:54.179156 5113 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"pvc-38369ce3-c291-478a-9b57-06ecf6a31e71\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38369ce3-c291-478a-9b57-06ecf6a31e71\") pod \"alertmanager-default-0\" (UID: \"c54c1065-fd71-4792-95c5-555b4af863c4\") " pod="service-telemetry/alertmanager-default-0" Jan 21 09:42:54 crc kubenswrapper[5113]: I0121 09:42:54.644768 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/c54c1065-fd71-4792-95c5-555b4af863c4-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"c54c1065-fd71-4792-95c5-555b4af863c4\") " pod="service-telemetry/alertmanager-default-0" Jan 21 09:42:54 crc kubenswrapper[5113]: E0121 09:42:54.644966 5113 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Jan 21 09:42:54 crc kubenswrapper[5113]: E0121 09:42:54.645038 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c54c1065-fd71-4792-95c5-555b4af863c4-secret-default-alertmanager-proxy-tls podName:c54c1065-fd71-4792-95c5-555b4af863c4 nodeName:}" failed. No retries permitted until 2026-01-21 09:42:55.645018357 +0000 UTC m=+1505.145845416 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/c54c1065-fd71-4792-95c5-555b4af863c4-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "c54c1065-fd71-4792-95c5-555b4af863c4") : secret "default-alertmanager-proxy-tls" not found Jan 21 09:42:55 crc kubenswrapper[5113]: I0121 09:42:55.659628 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/c54c1065-fd71-4792-95c5-555b4af863c4-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"c54c1065-fd71-4792-95c5-555b4af863c4\") " pod="service-telemetry/alertmanager-default-0" Jan 21 09:42:55 crc kubenswrapper[5113]: E0121 09:42:55.660009 5113 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Jan 21 09:42:55 crc kubenswrapper[5113]: E0121 09:42:55.660110 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c54c1065-fd71-4792-95c5-555b4af863c4-secret-default-alertmanager-proxy-tls podName:c54c1065-fd71-4792-95c5-555b4af863c4 nodeName:}" failed. No retries permitted until 2026-01-21 09:42:57.660087849 +0000 UTC m=+1507.160914898 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/c54c1065-fd71-4792-95c5-555b4af863c4-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "c54c1065-fd71-4792-95c5-555b4af863c4") : secret "default-alertmanager-proxy-tls" not found Jan 21 09:42:57 crc kubenswrapper[5113]: I0121 09:42:57.698784 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/c54c1065-fd71-4792-95c5-555b4af863c4-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"c54c1065-fd71-4792-95c5-555b4af863c4\") " pod="service-telemetry/alertmanager-default-0" Jan 21 09:42:57 crc kubenswrapper[5113]: I0121 09:42:57.723352 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/c54c1065-fd71-4792-95c5-555b4af863c4-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"c54c1065-fd71-4792-95c5-555b4af863c4\") " pod="service-telemetry/alertmanager-default-0" Jan 21 09:42:57 crc kubenswrapper[5113]: I0121 09:42:57.845700 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Jan 21 09:42:58 crc kubenswrapper[5113]: I0121 09:42:58.339657 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 09:42:58 crc kubenswrapper[5113]: I0121 09:42:58.339784 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 09:42:58 crc kubenswrapper[5113]: I0121 09:42:58.669272 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Jan 21 09:42:59 crc kubenswrapper[5113]: I0121 09:42:59.176274 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-694dc457d5-nvtcp" event={"ID":"904fae67-943b-4c4e-b2a9-969896ca1635","Type":"ContainerStarted","Data":"07d474f75a45f44d9e64617aec9c3f088f44e093b907d6a671f491dad1c3771b"} Jan 21 09:42:59 crc kubenswrapper[5113]: I0121 09:42:59.180986 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"c54c1065-fd71-4792-95c5-555b4af863c4","Type":"ContainerStarted","Data":"cb2fb894a69e0c3803ab8e1d5aa50b579ec9643dd747621e96704091fea71a93"} Jan 21 09:42:59 crc kubenswrapper[5113]: I0121 09:42:59.201962 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-694dc457d5-nvtcp" podStartSLOduration=1.206386642 podStartE2EDuration="9.201936995s" podCreationTimestamp="2026-01-21 09:42:50 +0000 UTC" firstStartedPulling="2026-01-21 
09:42:50.632502788 +0000 UTC m=+1500.133329837" lastFinishedPulling="2026-01-21 09:42:58.628053141 +0000 UTC m=+1508.128880190" observedRunningTime="2026-01-21 09:42:59.198399605 +0000 UTC m=+1508.699226654" watchObservedRunningTime="2026-01-21 09:42:59.201936995 +0000 UTC m=+1508.702764044" Jan 21 09:43:02 crc kubenswrapper[5113]: I0121 09:43:02.204654 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"c54c1065-fd71-4792-95c5-555b4af863c4","Type":"ContainerStarted","Data":"29cffc7cfd574360056f06de157b4944cad08570f0cf4b0ca1a18a7a6fa32f9d"} Jan 21 09:43:04 crc kubenswrapper[5113]: I0121 09:43:04.230206 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"7daab145-3025-4d93-bb61-8921bd849a13","Type":"ContainerStarted","Data":"46b6261006e8e6a2795c0f21ce6b366861f64f74e172cc8f2b0b36389ba6f79d"} Jan 21 09:43:07 crc kubenswrapper[5113]: I0121 09:43:07.145177 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst"] Jan 21 09:43:07 crc kubenswrapper[5113]: I0121 09:43:07.157019 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst" Jan 21 09:43:07 crc kubenswrapper[5113]: I0121 09:43:07.164146 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst"] Jan 21 09:43:07 crc kubenswrapper[5113]: I0121 09:43:07.168022 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-proxy-tls\"" Jan 21 09:43:07 crc kubenswrapper[5113]: I0121 09:43:07.168236 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-sg-core-configmap\"" Jan 21 09:43:07 crc kubenswrapper[5113]: I0121 09:43:07.168432 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-session-secret\"" Jan 21 09:43:07 crc kubenswrapper[5113]: I0121 09:43:07.168599 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-dockercfg-2tsqb\"" Jan 21 09:43:07 crc kubenswrapper[5113]: I0121 09:43:07.258024 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"7daab145-3025-4d93-bb61-8921bd849a13","Type":"ContainerStarted","Data":"034db0d381ae4b788674682b7bb58b9392f5427628e62f71b3a1ef95bc515375"} Jan 21 09:43:07 crc kubenswrapper[5113]: I0121 09:43:07.304518 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/44130ffd-90a2-4b98-b98a-28c10f45a9ca-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst\" (UID: \"44130ffd-90a2-4b98-b98a-28c10f45a9ca\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst" Jan 21 09:43:07 crc kubenswrapper[5113]: I0121 09:43:07.304629 5113 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/44130ffd-90a2-4b98-b98a-28c10f45a9ca-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst\" (UID: \"44130ffd-90a2-4b98-b98a-28c10f45a9ca\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst" Jan 21 09:43:07 crc kubenswrapper[5113]: I0121 09:43:07.304692 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/44130ffd-90a2-4b98-b98a-28c10f45a9ca-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst\" (UID: \"44130ffd-90a2-4b98-b98a-28c10f45a9ca\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst" Jan 21 09:43:07 crc kubenswrapper[5113]: I0121 09:43:07.304855 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqpzp\" (UniqueName: \"kubernetes.io/projected/44130ffd-90a2-4b98-b98a-28c10f45a9ca-kube-api-access-qqpzp\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst\" (UID: \"44130ffd-90a2-4b98-b98a-28c10f45a9ca\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst" Jan 21 09:43:07 crc kubenswrapper[5113]: I0121 09:43:07.304895 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/44130ffd-90a2-4b98-b98a-28c10f45a9ca-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst\" (UID: \"44130ffd-90a2-4b98-b98a-28c10f45a9ca\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst" Jan 21 09:43:07 crc kubenswrapper[5113]: I0121 09:43:07.405843 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-qqpzp\" (UniqueName: \"kubernetes.io/projected/44130ffd-90a2-4b98-b98a-28c10f45a9ca-kube-api-access-qqpzp\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst\" (UID: \"44130ffd-90a2-4b98-b98a-28c10f45a9ca\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst" Jan 21 09:43:07 crc kubenswrapper[5113]: I0121 09:43:07.405888 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/44130ffd-90a2-4b98-b98a-28c10f45a9ca-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst\" (UID: \"44130ffd-90a2-4b98-b98a-28c10f45a9ca\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst" Jan 21 09:43:07 crc kubenswrapper[5113]: I0121 09:43:07.405932 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/44130ffd-90a2-4b98-b98a-28c10f45a9ca-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst\" (UID: \"44130ffd-90a2-4b98-b98a-28c10f45a9ca\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst" Jan 21 09:43:07 crc kubenswrapper[5113]: I0121 09:43:07.406862 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/44130ffd-90a2-4b98-b98a-28c10f45a9ca-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst\" (UID: \"44130ffd-90a2-4b98-b98a-28c10f45a9ca\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst" Jan 21 09:43:07 crc kubenswrapper[5113]: I0121 09:43:07.406875 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/44130ffd-90a2-4b98-b98a-28c10f45a9ca-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst\" (UID: 
\"44130ffd-90a2-4b98-b98a-28c10f45a9ca\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst" Jan 21 09:43:07 crc kubenswrapper[5113]: I0121 09:43:07.406894 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/44130ffd-90a2-4b98-b98a-28c10f45a9ca-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst\" (UID: \"44130ffd-90a2-4b98-b98a-28c10f45a9ca\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst" Jan 21 09:43:07 crc kubenswrapper[5113]: E0121 09:43:07.406975 5113 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Jan 21 09:43:07 crc kubenswrapper[5113]: E0121 09:43:07.408415 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44130ffd-90a2-4b98-b98a-28c10f45a9ca-default-cloud1-coll-meter-proxy-tls podName:44130ffd-90a2-4b98-b98a-28c10f45a9ca nodeName:}" failed. No retries permitted until 2026-01-21 09:43:07.908395833 +0000 UTC m=+1517.409222882 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/44130ffd-90a2-4b98-b98a-28c10f45a9ca-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst" (UID: "44130ffd-90a2-4b98-b98a-28c10f45a9ca") : secret "default-cloud1-coll-meter-proxy-tls" not found Jan 21 09:43:07 crc kubenswrapper[5113]: I0121 09:43:07.407189 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/44130ffd-90a2-4b98-b98a-28c10f45a9ca-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst\" (UID: \"44130ffd-90a2-4b98-b98a-28c10f45a9ca\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst" Jan 21 09:43:07 crc kubenswrapper[5113]: I0121 09:43:07.412484 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/44130ffd-90a2-4b98-b98a-28c10f45a9ca-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst\" (UID: \"44130ffd-90a2-4b98-b98a-28c10f45a9ca\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst" Jan 21 09:43:07 crc kubenswrapper[5113]: I0121 09:43:07.424576 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqpzp\" (UniqueName: \"kubernetes.io/projected/44130ffd-90a2-4b98-b98a-28c10f45a9ca-kube-api-access-qqpzp\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst\" (UID: \"44130ffd-90a2-4b98-b98a-28c10f45a9ca\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst" Jan 21 09:43:07 crc kubenswrapper[5113]: I0121 09:43:07.918658 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/44130ffd-90a2-4b98-b98a-28c10f45a9ca-default-cloud1-coll-meter-proxy-tls\") pod 
\"default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst\" (UID: \"44130ffd-90a2-4b98-b98a-28c10f45a9ca\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst" Jan 21 09:43:07 crc kubenswrapper[5113]: E0121 09:43:07.918875 5113 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Jan 21 09:43:07 crc kubenswrapper[5113]: E0121 09:43:07.919181 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44130ffd-90a2-4b98-b98a-28c10f45a9ca-default-cloud1-coll-meter-proxy-tls podName:44130ffd-90a2-4b98-b98a-28c10f45a9ca nodeName:}" failed. No retries permitted until 2026-01-21 09:43:08.919159551 +0000 UTC m=+1518.419986590 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/44130ffd-90a2-4b98-b98a-28c10f45a9ca-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst" (UID: "44130ffd-90a2-4b98-b98a-28c10f45a9ca") : secret "default-cloud1-coll-meter-proxy-tls" not found Jan 21 09:43:08 crc kubenswrapper[5113]: I0121 09:43:08.931914 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/44130ffd-90a2-4b98-b98a-28c10f45a9ca-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst\" (UID: \"44130ffd-90a2-4b98-b98a-28c10f45a9ca\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst" Jan 21 09:43:08 crc kubenswrapper[5113]: I0121 09:43:08.936307 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/44130ffd-90a2-4b98-b98a-28c10f45a9ca-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst\" 
(UID: \"44130ffd-90a2-4b98-b98a-28c10f45a9ca\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst" Jan 21 09:43:08 crc kubenswrapper[5113]: I0121 09:43:08.982978 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst" Jan 21 09:43:09 crc kubenswrapper[5113]: I0121 09:43:09.279543 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst"] Jan 21 09:43:09 crc kubenswrapper[5113]: I0121 09:43:09.281258 5113 generic.go:358] "Generic (PLEG): container finished" podID="c54c1065-fd71-4792-95c5-555b4af863c4" containerID="29cffc7cfd574360056f06de157b4944cad08570f0cf4b0ca1a18a7a6fa32f9d" exitCode=0 Jan 21 09:43:09 crc kubenswrapper[5113]: I0121 09:43:09.281473 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"c54c1065-fd71-4792-95c5-555b4af863c4","Type":"ContainerDied","Data":"29cffc7cfd574360056f06de157b4944cad08570f0cf4b0ca1a18a7a6fa32f9d"} Jan 21 09:43:09 crc kubenswrapper[5113]: W0121 09:43:09.289079 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44130ffd_90a2_4b98_b98a_28c10f45a9ca.slice/crio-f645727d0e845602e45e09cacc7fe7ef01df2dd5bd8d049ec83d2ebb55faec64 WatchSource:0}: Error finding container f645727d0e845602e45e09cacc7fe7ef01df2dd5bd8d049ec83d2ebb55faec64: Status 404 returned error can't find the container with id f645727d0e845602e45e09cacc7fe7ef01df2dd5bd8d049ec83d2ebb55faec64 Jan 21 09:43:10 crc kubenswrapper[5113]: I0121 09:43:10.292832 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst" 
event={"ID":"44130ffd-90a2-4b98-b98a-28c10f45a9ca","Type":"ContainerStarted","Data":"f645727d0e845602e45e09cacc7fe7ef01df2dd5bd8d049ec83d2ebb55faec64"} Jan 21 09:43:10 crc kubenswrapper[5113]: I0121 09:43:10.730683 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq"] Jan 21 09:43:10 crc kubenswrapper[5113]: I0121 09:43:10.740511 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq" Jan 21 09:43:10 crc kubenswrapper[5113]: E0121 09:43:10.744010 5113 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"default-cloud1-ceil-meter-sg-core-configmap\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"service-telemetry\": no relationship found between node 'crc' and this object" logger="UnhandledError" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-sg-core-configmap\"" type="*v1.ConfigMap" Jan 21 09:43:10 crc kubenswrapper[5113]: I0121 09:43:10.746200 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-proxy-tls\"" Jan 21 09:43:10 crc kubenswrapper[5113]: I0121 09:43:10.772411 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq"] Jan 21 09:43:10 crc kubenswrapper[5113]: I0121 09:43:10.787593 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/aa026e59-d4cb-4580-b908-a08b359465a2-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq\" (UID: \"aa026e59-d4cb-4580-b908-a08b359465a2\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq" 
Jan 21 09:43:10 crc kubenswrapper[5113]: I0121 09:43:10.787684 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/aa026e59-d4cb-4580-b908-a08b359465a2-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq\" (UID: \"aa026e59-d4cb-4580-b908-a08b359465a2\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq" Jan 21 09:43:10 crc kubenswrapper[5113]: I0121 09:43:10.787824 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/aa026e59-d4cb-4580-b908-a08b359465a2-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq\" (UID: \"aa026e59-d4cb-4580-b908-a08b359465a2\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq" Jan 21 09:43:10 crc kubenswrapper[5113]: I0121 09:43:10.787869 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d88rt\" (UniqueName: \"kubernetes.io/projected/aa026e59-d4cb-4580-b908-a08b359465a2-kube-api-access-d88rt\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq\" (UID: \"aa026e59-d4cb-4580-b908-a08b359465a2\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq" Jan 21 09:43:10 crc kubenswrapper[5113]: I0121 09:43:10.787927 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/aa026e59-d4cb-4580-b908-a08b359465a2-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq\" (UID: \"aa026e59-d4cb-4580-b908-a08b359465a2\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq" Jan 21 09:43:10 crc kubenswrapper[5113]: I0121 09:43:10.889408 5113 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/aa026e59-d4cb-4580-b908-a08b359465a2-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq\" (UID: \"aa026e59-d4cb-4580-b908-a08b359465a2\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq" Jan 21 09:43:10 crc kubenswrapper[5113]: I0121 09:43:10.889477 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/aa026e59-d4cb-4580-b908-a08b359465a2-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq\" (UID: \"aa026e59-d4cb-4580-b908-a08b359465a2\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq" Jan 21 09:43:10 crc kubenswrapper[5113]: I0121 09:43:10.889521 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/aa026e59-d4cb-4580-b908-a08b359465a2-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq\" (UID: \"aa026e59-d4cb-4580-b908-a08b359465a2\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq" Jan 21 09:43:10 crc kubenswrapper[5113]: I0121 09:43:10.889557 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d88rt\" (UniqueName: \"kubernetes.io/projected/aa026e59-d4cb-4580-b908-a08b359465a2-kube-api-access-d88rt\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq\" (UID: \"aa026e59-d4cb-4580-b908-a08b359465a2\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq" Jan 21 09:43:10 crc kubenswrapper[5113]: I0121 09:43:10.889602 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/aa026e59-d4cb-4580-b908-a08b359465a2-sg-core-config\") pod 
\"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq\" (UID: \"aa026e59-d4cb-4580-b908-a08b359465a2\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq" Jan 21 09:43:10 crc kubenswrapper[5113]: E0121 09:43:10.889841 5113 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 21 09:43:10 crc kubenswrapper[5113]: E0121 09:43:10.889909 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa026e59-d4cb-4580-b908-a08b359465a2-default-cloud1-ceil-meter-proxy-tls podName:aa026e59-d4cb-4580-b908-a08b359465a2 nodeName:}" failed. No retries permitted until 2026-01-21 09:43:11.389889011 +0000 UTC m=+1520.890716060 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/aa026e59-d4cb-4580-b908-a08b359465a2-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq" (UID: "aa026e59-d4cb-4580-b908-a08b359465a2") : secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 21 09:43:10 crc kubenswrapper[5113]: I0121 09:43:10.890304 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/aa026e59-d4cb-4580-b908-a08b359465a2-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq\" (UID: \"aa026e59-d4cb-4580-b908-a08b359465a2\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq" Jan 21 09:43:10 crc kubenswrapper[5113]: I0121 09:43:10.902661 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/aa026e59-d4cb-4580-b908-a08b359465a2-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq\" (UID: \"aa026e59-d4cb-4580-b908-a08b359465a2\") " 
pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq" Jan 21 09:43:10 crc kubenswrapper[5113]: I0121 09:43:10.914585 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d88rt\" (UniqueName: \"kubernetes.io/projected/aa026e59-d4cb-4580-b908-a08b359465a2-kube-api-access-d88rt\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq\" (UID: \"aa026e59-d4cb-4580-b908-a08b359465a2\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq" Jan 21 09:43:11 crc kubenswrapper[5113]: I0121 09:43:11.396077 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/aa026e59-d4cb-4580-b908-a08b359465a2-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq\" (UID: \"aa026e59-d4cb-4580-b908-a08b359465a2\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq" Jan 21 09:43:11 crc kubenswrapper[5113]: E0121 09:43:11.396304 5113 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 21 09:43:11 crc kubenswrapper[5113]: E0121 09:43:11.396360 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa026e59-d4cb-4580-b908-a08b359465a2-default-cloud1-ceil-meter-proxy-tls podName:aa026e59-d4cb-4580-b908-a08b359465a2 nodeName:}" failed. No retries permitted until 2026-01-21 09:43:12.396343037 +0000 UTC m=+1521.897170086 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/aa026e59-d4cb-4580-b908-a08b359465a2-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq" (UID: "aa026e59-d4cb-4580-b908-a08b359465a2") : secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 21 09:43:11 crc kubenswrapper[5113]: I0121 09:43:11.819173 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-sg-core-configmap\"" Jan 21 09:43:11 crc kubenswrapper[5113]: I0121 09:43:11.821269 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/aa026e59-d4cb-4580-b908-a08b359465a2-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq\" (UID: \"aa026e59-d4cb-4580-b908-a08b359465a2\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq" Jan 21 09:43:12 crc kubenswrapper[5113]: I0121 09:43:12.410044 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/aa026e59-d4cb-4580-b908-a08b359465a2-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq\" (UID: \"aa026e59-d4cb-4580-b908-a08b359465a2\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq" Jan 21 09:43:12 crc kubenswrapper[5113]: I0121 09:43:12.416518 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/aa026e59-d4cb-4580-b908-a08b359465a2-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq\" (UID: \"aa026e59-d4cb-4580-b908-a08b359465a2\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq" Jan 21 09:43:12 crc 
kubenswrapper[5113]: I0121 09:43:12.561696 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq" Jan 21 09:43:14 crc kubenswrapper[5113]: I0121 09:43:14.497773 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z"] Jan 21 09:43:14 crc kubenswrapper[5113]: I0121 09:43:14.538896 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z"] Jan 21 09:43:14 crc kubenswrapper[5113]: I0121 09:43:14.539041 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z" Jan 21 09:43:14 crc kubenswrapper[5113]: I0121 09:43:14.541611 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-proxy-tls\"" Jan 21 09:43:14 crc kubenswrapper[5113]: I0121 09:43:14.545069 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-sg-core-configmap\"" Jan 21 09:43:14 crc kubenswrapper[5113]: I0121 09:43:14.641081 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/ecf101f2-0ba8-4f13-8b28-25f3102f6907-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z\" (UID: \"ecf101f2-0ba8-4f13-8b28-25f3102f6907\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z" Jan 21 09:43:14 crc kubenswrapper[5113]: I0121 09:43:14.641130 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/ecf101f2-0ba8-4f13-8b28-25f3102f6907-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z\" (UID: \"ecf101f2-0ba8-4f13-8b28-25f3102f6907\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z" Jan 21 09:43:14 crc kubenswrapper[5113]: I0121 09:43:14.641294 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/ecf101f2-0ba8-4f13-8b28-25f3102f6907-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z\" (UID: \"ecf101f2-0ba8-4f13-8b28-25f3102f6907\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z" Jan 21 09:43:14 crc kubenswrapper[5113]: I0121 09:43:14.641345 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k2j7\" (UniqueName: \"kubernetes.io/projected/ecf101f2-0ba8-4f13-8b28-25f3102f6907-kube-api-access-8k2j7\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z\" (UID: \"ecf101f2-0ba8-4f13-8b28-25f3102f6907\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z" Jan 21 09:43:14 crc kubenswrapper[5113]: I0121 09:43:14.641550 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/ecf101f2-0ba8-4f13-8b28-25f3102f6907-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z\" (UID: \"ecf101f2-0ba8-4f13-8b28-25f3102f6907\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z" Jan 21 09:43:14 crc kubenswrapper[5113]: I0121 09:43:14.743225 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/ecf101f2-0ba8-4f13-8b28-25f3102f6907-socket-dir\") pod 
\"default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z\" (UID: \"ecf101f2-0ba8-4f13-8b28-25f3102f6907\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z" Jan 21 09:43:14 crc kubenswrapper[5113]: I0121 09:43:14.743282 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8k2j7\" (UniqueName: \"kubernetes.io/projected/ecf101f2-0ba8-4f13-8b28-25f3102f6907-kube-api-access-8k2j7\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z\" (UID: \"ecf101f2-0ba8-4f13-8b28-25f3102f6907\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z" Jan 21 09:43:14 crc kubenswrapper[5113]: I0121 09:43:14.743331 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/ecf101f2-0ba8-4f13-8b28-25f3102f6907-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z\" (UID: \"ecf101f2-0ba8-4f13-8b28-25f3102f6907\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z" Jan 21 09:43:14 crc kubenswrapper[5113]: I0121 09:43:14.743369 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/ecf101f2-0ba8-4f13-8b28-25f3102f6907-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z\" (UID: \"ecf101f2-0ba8-4f13-8b28-25f3102f6907\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z" Jan 21 09:43:14 crc kubenswrapper[5113]: I0121 09:43:14.743416 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/ecf101f2-0ba8-4f13-8b28-25f3102f6907-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z\" (UID: \"ecf101f2-0ba8-4f13-8b28-25f3102f6907\") " 
pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z" Jan 21 09:43:14 crc kubenswrapper[5113]: E0121 09:43:14.743564 5113 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Jan 21 09:43:14 crc kubenswrapper[5113]: E0121 09:43:14.743623 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ecf101f2-0ba8-4f13-8b28-25f3102f6907-default-cloud1-sens-meter-proxy-tls podName:ecf101f2-0ba8-4f13-8b28-25f3102f6907 nodeName:}" failed. No retries permitted until 2026-01-21 09:43:15.243604368 +0000 UTC m=+1524.744431417 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/ecf101f2-0ba8-4f13-8b28-25f3102f6907-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z" (UID: "ecf101f2-0ba8-4f13-8b28-25f3102f6907") : secret "default-cloud1-sens-meter-proxy-tls" not found Jan 21 09:43:14 crc kubenswrapper[5113]: I0121 09:43:14.743788 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/ecf101f2-0ba8-4f13-8b28-25f3102f6907-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z\" (UID: \"ecf101f2-0ba8-4f13-8b28-25f3102f6907\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z" Jan 21 09:43:14 crc kubenswrapper[5113]: I0121 09:43:14.744272 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/ecf101f2-0ba8-4f13-8b28-25f3102f6907-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z\" (UID: \"ecf101f2-0ba8-4f13-8b28-25f3102f6907\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z" Jan 21 09:43:14 crc kubenswrapper[5113]: I0121 09:43:14.756465 5113 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/ecf101f2-0ba8-4f13-8b28-25f3102f6907-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z\" (UID: \"ecf101f2-0ba8-4f13-8b28-25f3102f6907\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z" Jan 21 09:43:14 crc kubenswrapper[5113]: I0121 09:43:14.760485 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k2j7\" (UniqueName: \"kubernetes.io/projected/ecf101f2-0ba8-4f13-8b28-25f3102f6907-kube-api-access-8k2j7\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z\" (UID: \"ecf101f2-0ba8-4f13-8b28-25f3102f6907\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z" Jan 21 09:43:15 crc kubenswrapper[5113]: I0121 09:43:15.261765 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/ecf101f2-0ba8-4f13-8b28-25f3102f6907-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z\" (UID: \"ecf101f2-0ba8-4f13-8b28-25f3102f6907\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z" Jan 21 09:43:15 crc kubenswrapper[5113]: E0121 09:43:15.262694 5113 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Jan 21 09:43:15 crc kubenswrapper[5113]: E0121 09:43:15.262769 5113 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ecf101f2-0ba8-4f13-8b28-25f3102f6907-default-cloud1-sens-meter-proxy-tls podName:ecf101f2-0ba8-4f13-8b28-25f3102f6907 nodeName:}" failed. No retries permitted until 2026-01-21 09:43:16.262752233 +0000 UTC m=+1525.763579282 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/ecf101f2-0ba8-4f13-8b28-25f3102f6907-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z" (UID: "ecf101f2-0ba8-4f13-8b28-25f3102f6907") : secret "default-cloud1-sens-meter-proxy-tls" not found Jan 21 09:43:15 crc kubenswrapper[5113]: I0121 09:43:15.337984 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"7daab145-3025-4d93-bb61-8921bd849a13","Type":"ContainerStarted","Data":"d506f75f53583f38aa6d147ba3ed3c7b9fcb977ad33080e97267fa6ffd0b9d04"} Jan 21 09:43:15 crc kubenswrapper[5113]: I0121 09:43:15.340232 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst" event={"ID":"44130ffd-90a2-4b98-b98a-28c10f45a9ca","Type":"ContainerStarted","Data":"399cb04f0dc0c4e9a88aa2728bed246a4574fbe0dfbf2bbc09a4e112d12d763f"} Jan 21 09:43:15 crc kubenswrapper[5113]: I0121 09:43:15.364159 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=3.686016596 podStartE2EDuration="37.364141371s" podCreationTimestamp="2026-01-21 09:42:38 +0000 UTC" firstStartedPulling="2026-01-21 09:42:41.441845919 +0000 UTC m=+1490.942672968" lastFinishedPulling="2026-01-21 09:43:15.119970694 +0000 UTC m=+1524.620797743" observedRunningTime="2026-01-21 09:43:15.361607059 +0000 UTC m=+1524.862434108" watchObservedRunningTime="2026-01-21 09:43:15.364141371 +0000 UTC m=+1524.864968420" Jan 21 09:43:15 crc kubenswrapper[5113]: I0121 09:43:15.498992 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq"] Jan 21 09:43:15 crc kubenswrapper[5113]: W0121 09:43:15.577780 5113 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa026e59_d4cb_4580_b908_a08b359465a2.slice/crio-8d8a86cebba98a44bcd72ae131da6441f4ff4c7c7e47c329ee84abc8f55f9cd2 WatchSource:0}: Error finding container 8d8a86cebba98a44bcd72ae131da6441f4ff4c7c7e47c329ee84abc8f55f9cd2: Status 404 returned error can't find the container with id 8d8a86cebba98a44bcd72ae131da6441f4ff4c7c7e47c329ee84abc8f55f9cd2 Jan 21 09:43:16 crc kubenswrapper[5113]: I0121 09:43:16.192685 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/prometheus-default-0" Jan 21 09:43:16 crc kubenswrapper[5113]: I0121 09:43:16.278845 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/ecf101f2-0ba8-4f13-8b28-25f3102f6907-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z\" (UID: \"ecf101f2-0ba8-4f13-8b28-25f3102f6907\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z" Jan 21 09:43:16 crc kubenswrapper[5113]: I0121 09:43:16.298300 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/ecf101f2-0ba8-4f13-8b28-25f3102f6907-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z\" (UID: \"ecf101f2-0ba8-4f13-8b28-25f3102f6907\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z" Jan 21 09:43:16 crc kubenswrapper[5113]: I0121 09:43:16.348810 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq" event={"ID":"aa026e59-d4cb-4580-b908-a08b359465a2","Type":"ContainerStarted","Data":"8d8a86cebba98a44bcd72ae131da6441f4ff4c7c7e47c329ee84abc8f55f9cd2"} Jan 21 09:43:16 crc kubenswrapper[5113]: I0121 09:43:16.361842 5113 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z" Jan 21 09:43:17 crc kubenswrapper[5113]: I0121 09:43:17.707233 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z"] Jan 21 09:43:17 crc kubenswrapper[5113]: W0121 09:43:17.714477 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podecf101f2_0ba8_4f13_8b28_25f3102f6907.slice/crio-1cb386200a47f6fc6354a14509ef0ddf30803adebc4ff0e7f1c0b79f71207049 WatchSource:0}: Error finding container 1cb386200a47f6fc6354a14509ef0ddf30803adebc4ff0e7f1c0b79f71207049: Status 404 returned error can't find the container with id 1cb386200a47f6fc6354a14509ef0ddf30803adebc4ff0e7f1c0b79f71207049 Jan 21 09:43:18 crc kubenswrapper[5113]: I0121 09:43:18.378503 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"c54c1065-fd71-4792-95c5-555b4af863c4","Type":"ContainerStarted","Data":"e8ae03abee188d1289948f637bf60b806fe4879138ce263d199472faad5a7758"} Jan 21 09:43:18 crc kubenswrapper[5113]: I0121 09:43:18.380848 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst" event={"ID":"44130ffd-90a2-4b98-b98a-28c10f45a9ca","Type":"ContainerStarted","Data":"a496df424f0207d4250013c0040f76aa9660bad18a42e0284c6bab06a60c0f3b"} Jan 21 09:43:18 crc kubenswrapper[5113]: I0121 09:43:18.383341 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq" event={"ID":"aa026e59-d4cb-4580-b908-a08b359465a2","Type":"ContainerStarted","Data":"ce2374ee4d90d94ae6f1b503929052fecf0e17986648abc4c10e2e2bbe55f0ef"} Jan 21 09:43:18 crc kubenswrapper[5113]: I0121 09:43:18.383369 5113 kubelet.go:2569] "SyncLoop (PLEG): 
event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq" event={"ID":"aa026e59-d4cb-4580-b908-a08b359465a2","Type":"ContainerStarted","Data":"8457412b536952997ea3afd1a608841bc7773622eb2fcf802063aefc66974dcd"} Jan 21 09:43:18 crc kubenswrapper[5113]: I0121 09:43:18.385527 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z" event={"ID":"ecf101f2-0ba8-4f13-8b28-25f3102f6907","Type":"ContainerStarted","Data":"1cb386200a47f6fc6354a14509ef0ddf30803adebc4ff0e7f1c0b79f71207049"} Jan 21 09:43:19 crc kubenswrapper[5113]: I0121 09:43:19.397684 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z" event={"ID":"ecf101f2-0ba8-4f13-8b28-25f3102f6907","Type":"ContainerStarted","Data":"2987f5ef345b0fd5d70606da5f8b323d2e579e5bae8bb8b919585aaa37454865"} Jan 21 09:43:19 crc kubenswrapper[5113]: I0121 09:43:19.398299 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z" event={"ID":"ecf101f2-0ba8-4f13-8b28-25f3102f6907","Type":"ContainerStarted","Data":"fc7a33cefa79aebaaa6efebd662adc71594a81b6fbd46a281da53c93d6c37312"} Jan 21 09:43:20 crc kubenswrapper[5113]: I0121 09:43:20.420998 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"c54c1065-fd71-4792-95c5-555b4af863c4","Type":"ContainerStarted","Data":"28cadea8b0f875519b4eaeccaad6540f9ee684bd828bca3a7960b30848442224"} Jan 21 09:43:21 crc kubenswrapper[5113]: I0121 09:43:21.592444 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-585497b7f5-km7ct"] Jan 21 09:43:21 crc kubenswrapper[5113]: I0121 09:43:21.728539 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["service-telemetry/default-cloud1-coll-event-smartgateway-585497b7f5-km7ct"] Jan 21 09:43:21 crc kubenswrapper[5113]: I0121 09:43:21.728677 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-585497b7f5-km7ct" Jan 21 09:43:21 crc kubenswrapper[5113]: I0121 09:43:21.732163 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-event-sg-core-configmap\"" Jan 21 09:43:21 crc kubenswrapper[5113]: I0121 09:43:21.737826 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-cert\"" Jan 21 09:43:21 crc kubenswrapper[5113]: I0121 09:43:21.901297 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/4cc32795-d962-403a-a4b0-b05770e77786-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-585497b7f5-km7ct\" (UID: \"4cc32795-d962-403a-a4b0-b05770e77786\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-585497b7f5-km7ct" Jan 21 09:43:21 crc kubenswrapper[5113]: I0121 09:43:21.901359 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/4cc32795-d962-403a-a4b0-b05770e77786-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-585497b7f5-km7ct\" (UID: \"4cc32795-d962-403a-a4b0-b05770e77786\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-585497b7f5-km7ct" Jan 21 09:43:21 crc kubenswrapper[5113]: I0121 09:43:21.901385 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/4cc32795-d962-403a-a4b0-b05770e77786-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-585497b7f5-km7ct\" (UID: 
\"4cc32795-d962-403a-a4b0-b05770e77786\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-585497b7f5-km7ct" Jan 21 09:43:21 crc kubenswrapper[5113]: I0121 09:43:21.901585 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-556k6\" (UniqueName: \"kubernetes.io/projected/4cc32795-d962-403a-a4b0-b05770e77786-kube-api-access-556k6\") pod \"default-cloud1-coll-event-smartgateway-585497b7f5-km7ct\" (UID: \"4cc32795-d962-403a-a4b0-b05770e77786\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-585497b7f5-km7ct" Jan 21 09:43:22 crc kubenswrapper[5113]: I0121 09:43:22.003035 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/4cc32795-d962-403a-a4b0-b05770e77786-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-585497b7f5-km7ct\" (UID: \"4cc32795-d962-403a-a4b0-b05770e77786\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-585497b7f5-km7ct" Jan 21 09:43:22 crc kubenswrapper[5113]: I0121 09:43:22.003308 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/4cc32795-d962-403a-a4b0-b05770e77786-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-585497b7f5-km7ct\" (UID: \"4cc32795-d962-403a-a4b0-b05770e77786\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-585497b7f5-km7ct" Jan 21 09:43:22 crc kubenswrapper[5113]: I0121 09:43:22.003386 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/4cc32795-d962-403a-a4b0-b05770e77786-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-585497b7f5-km7ct\" (UID: \"4cc32795-d962-403a-a4b0-b05770e77786\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-585497b7f5-km7ct" Jan 21 09:43:22 crc kubenswrapper[5113]: I0121 
09:43:22.003538 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-556k6\" (UniqueName: \"kubernetes.io/projected/4cc32795-d962-403a-a4b0-b05770e77786-kube-api-access-556k6\") pod \"default-cloud1-coll-event-smartgateway-585497b7f5-km7ct\" (UID: \"4cc32795-d962-403a-a4b0-b05770e77786\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-585497b7f5-km7ct" Jan 21 09:43:22 crc kubenswrapper[5113]: I0121 09:43:22.003654 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/4cc32795-d962-403a-a4b0-b05770e77786-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-585497b7f5-km7ct\" (UID: \"4cc32795-d962-403a-a4b0-b05770e77786\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-585497b7f5-km7ct" Jan 21 09:43:22 crc kubenswrapper[5113]: I0121 09:43:22.005320 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/4cc32795-d962-403a-a4b0-b05770e77786-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-585497b7f5-km7ct\" (UID: \"4cc32795-d962-403a-a4b0-b05770e77786\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-585497b7f5-km7ct" Jan 21 09:43:22 crc kubenswrapper[5113]: I0121 09:43:22.019976 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/4cc32795-d962-403a-a4b0-b05770e77786-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-585497b7f5-km7ct\" (UID: \"4cc32795-d962-403a-a4b0-b05770e77786\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-585497b7f5-km7ct" Jan 21 09:43:22 crc kubenswrapper[5113]: I0121 09:43:22.025306 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-556k6\" (UniqueName: \"kubernetes.io/projected/4cc32795-d962-403a-a4b0-b05770e77786-kube-api-access-556k6\") 
pod \"default-cloud1-coll-event-smartgateway-585497b7f5-km7ct\" (UID: \"4cc32795-d962-403a-a4b0-b05770e77786\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-585497b7f5-km7ct" Jan 21 09:43:22 crc kubenswrapper[5113]: I0121 09:43:22.051092 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-585497b7f5-km7ct" Jan 21 09:43:23 crc kubenswrapper[5113]: I0121 09:43:23.005498 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-b67f79854-pcpch"] Jan 21 09:43:23 crc kubenswrapper[5113]: I0121 09:43:23.013669 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-b67f79854-pcpch" Jan 21 09:43:23 crc kubenswrapper[5113]: I0121 09:43:23.014552 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-b67f79854-pcpch"] Jan 21 09:43:23 crc kubenswrapper[5113]: I0121 09:43:23.017174 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-event-sg-core-configmap\"" Jan 21 09:43:23 crc kubenswrapper[5113]: I0121 09:43:23.041277 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/8d650442-026a-4ce5-9d25-8354edb3df27-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-b67f79854-pcpch\" (UID: \"8d650442-026a-4ce5-9d25-8354edb3df27\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-b67f79854-pcpch" Jan 21 09:43:23 crc kubenswrapper[5113]: I0121 09:43:23.041408 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/8d650442-026a-4ce5-9d25-8354edb3df27-socket-dir\") pod 
\"default-cloud1-ceil-event-smartgateway-b67f79854-pcpch\" (UID: \"8d650442-026a-4ce5-9d25-8354edb3df27\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-b67f79854-pcpch" Jan 21 09:43:23 crc kubenswrapper[5113]: I0121 09:43:23.041455 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsmcj\" (UniqueName: \"kubernetes.io/projected/8d650442-026a-4ce5-9d25-8354edb3df27-kube-api-access-wsmcj\") pod \"default-cloud1-ceil-event-smartgateway-b67f79854-pcpch\" (UID: \"8d650442-026a-4ce5-9d25-8354edb3df27\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-b67f79854-pcpch" Jan 21 09:43:23 crc kubenswrapper[5113]: I0121 09:43:23.041482 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/8d650442-026a-4ce5-9d25-8354edb3df27-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-b67f79854-pcpch\" (UID: \"8d650442-026a-4ce5-9d25-8354edb3df27\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-b67f79854-pcpch" Jan 21 09:43:23 crc kubenswrapper[5113]: I0121 09:43:23.142265 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wsmcj\" (UniqueName: \"kubernetes.io/projected/8d650442-026a-4ce5-9d25-8354edb3df27-kube-api-access-wsmcj\") pod \"default-cloud1-ceil-event-smartgateway-b67f79854-pcpch\" (UID: \"8d650442-026a-4ce5-9d25-8354edb3df27\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-b67f79854-pcpch" Jan 21 09:43:23 crc kubenswrapper[5113]: I0121 09:43:23.142330 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/8d650442-026a-4ce5-9d25-8354edb3df27-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-b67f79854-pcpch\" (UID: \"8d650442-026a-4ce5-9d25-8354edb3df27\") " 
pod="service-telemetry/default-cloud1-ceil-event-smartgateway-b67f79854-pcpch" Jan 21 09:43:23 crc kubenswrapper[5113]: I0121 09:43:23.142386 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/8d650442-026a-4ce5-9d25-8354edb3df27-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-b67f79854-pcpch\" (UID: \"8d650442-026a-4ce5-9d25-8354edb3df27\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-b67f79854-pcpch" Jan 21 09:43:23 crc kubenswrapper[5113]: I0121 09:43:23.142491 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/8d650442-026a-4ce5-9d25-8354edb3df27-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-b67f79854-pcpch\" (UID: \"8d650442-026a-4ce5-9d25-8354edb3df27\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-b67f79854-pcpch" Jan 21 09:43:23 crc kubenswrapper[5113]: I0121 09:43:23.143065 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/8d650442-026a-4ce5-9d25-8354edb3df27-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-b67f79854-pcpch\" (UID: \"8d650442-026a-4ce5-9d25-8354edb3df27\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-b67f79854-pcpch" Jan 21 09:43:23 crc kubenswrapper[5113]: I0121 09:43:23.143336 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/8d650442-026a-4ce5-9d25-8354edb3df27-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-b67f79854-pcpch\" (UID: \"8d650442-026a-4ce5-9d25-8354edb3df27\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-b67f79854-pcpch" Jan 21 09:43:23 crc kubenswrapper[5113]: I0121 09:43:23.147650 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" 
(UniqueName: \"kubernetes.io/secret/8d650442-026a-4ce5-9d25-8354edb3df27-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-b67f79854-pcpch\" (UID: \"8d650442-026a-4ce5-9d25-8354edb3df27\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-b67f79854-pcpch" Jan 21 09:43:23 crc kubenswrapper[5113]: I0121 09:43:23.158019 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsmcj\" (UniqueName: \"kubernetes.io/projected/8d650442-026a-4ce5-9d25-8354edb3df27-kube-api-access-wsmcj\") pod \"default-cloud1-ceil-event-smartgateway-b67f79854-pcpch\" (UID: \"8d650442-026a-4ce5-9d25-8354edb3df27\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-b67f79854-pcpch" Jan 21 09:43:23 crc kubenswrapper[5113]: I0121 09:43:23.364435 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-b67f79854-pcpch" Jan 21 09:43:25 crc kubenswrapper[5113]: I0121 09:43:25.040207 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-b67f79854-pcpch"] Jan 21 09:43:25 crc kubenswrapper[5113]: W0121 09:43:25.042261 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d650442_026a_4ce5_9d25_8354edb3df27.slice/crio-14f9ca1afdee628bdc5688c12af6c0cfead76a49b929e44ca322299574717da4 WatchSource:0}: Error finding container 14f9ca1afdee628bdc5688c12af6c0cfead76a49b929e44ca322299574717da4: Status 404 returned error can't find the container with id 14f9ca1afdee628bdc5688c12af6c0cfead76a49b929e44ca322299574717da4 Jan 21 09:43:25 crc kubenswrapper[5113]: I0121 09:43:25.115065 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-585497b7f5-km7ct"] Jan 21 09:43:25 crc kubenswrapper[5113]: W0121 09:43:25.118043 5113 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4cc32795_d962_403a_a4b0_b05770e77786.slice/crio-b2209f856278a9ad979859459ca64750b47707cdbd9b1dc2cef275be9a0cf376 WatchSource:0}: Error finding container b2209f856278a9ad979859459ca64750b47707cdbd9b1dc2cef275be9a0cf376: Status 404 returned error can't find the container with id b2209f856278a9ad979859459ca64750b47707cdbd9b1dc2cef275be9a0cf376 Jan 21 09:43:25 crc kubenswrapper[5113]: I0121 09:43:25.461857 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst" event={"ID":"44130ffd-90a2-4b98-b98a-28c10f45a9ca","Type":"ContainerStarted","Data":"3b6df3b43257d2ac4193397bd2c58ce8f5b8dd3f78057d741c89a3497dcada4d"} Jan 21 09:43:25 crc kubenswrapper[5113]: I0121 09:43:25.465536 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq" event={"ID":"aa026e59-d4cb-4580-b908-a08b359465a2","Type":"ContainerStarted","Data":"37177fb29a00f93fbc5e2ea69ffa8ce93d313fa0498a257b8035392b6a5cccd7"} Jan 21 09:43:25 crc kubenswrapper[5113]: I0121 09:43:25.471289 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z" event={"ID":"ecf101f2-0ba8-4f13-8b28-25f3102f6907","Type":"ContainerStarted","Data":"40d8a8df4bdcd76fca7526721d8b4ca509cec49c550caf85046145e4043ad293"} Jan 21 09:43:25 crc kubenswrapper[5113]: I0121 09:43:25.475175 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-b67f79854-pcpch" event={"ID":"8d650442-026a-4ce5-9d25-8354edb3df27","Type":"ContainerStarted","Data":"90e0232482e323bafa1f0a7246517df6254540529ee6274066fa7ef7eea73b30"} Jan 21 09:43:25 crc kubenswrapper[5113]: I0121 09:43:25.475220 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-ceil-event-smartgateway-b67f79854-pcpch" event={"ID":"8d650442-026a-4ce5-9d25-8354edb3df27","Type":"ContainerStarted","Data":"98208026d40ae892ae80c0acd476cef481b3602353a576ca0394d99a6745e590"} Jan 21 09:43:25 crc kubenswrapper[5113]: I0121 09:43:25.475231 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-b67f79854-pcpch" event={"ID":"8d650442-026a-4ce5-9d25-8354edb3df27","Type":"ContainerStarted","Data":"14f9ca1afdee628bdc5688c12af6c0cfead76a49b929e44ca322299574717da4"} Jan 21 09:43:25 crc kubenswrapper[5113]: I0121 09:43:25.486314 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst" podStartSLOduration=3.026471628 podStartE2EDuration="18.486295708s" podCreationTimestamp="2026-01-21 09:43:07 +0000 UTC" firstStartedPulling="2026-01-21 09:43:09.291151489 +0000 UTC m=+1518.791978528" lastFinishedPulling="2026-01-21 09:43:24.750975559 +0000 UTC m=+1534.251802608" observedRunningTime="2026-01-21 09:43:25.476237053 +0000 UTC m=+1534.977064112" watchObservedRunningTime="2026-01-21 09:43:25.486295708 +0000 UTC m=+1534.987122757" Jan 21 09:43:25 crc kubenswrapper[5113]: I0121 09:43:25.501901 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"c54c1065-fd71-4792-95c5-555b4af863c4","Type":"ContainerStarted","Data":"9b741808ae0f4532f044b89123d38340cdcc993b04df42eb978e38cb463e9f09"} Jan 21 09:43:25 crc kubenswrapper[5113]: I0121 09:43:25.503782 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq" podStartSLOduration=6.381383193 podStartE2EDuration="15.503766382s" podCreationTimestamp="2026-01-21 09:43:10 +0000 UTC" firstStartedPulling="2026-01-21 09:43:15.58261305 +0000 UTC m=+1525.083440099" 
lastFinishedPulling="2026-01-21 09:43:24.704996239 +0000 UTC m=+1534.205823288" observedRunningTime="2026-01-21 09:43:25.502096985 +0000 UTC m=+1535.002924034" watchObservedRunningTime="2026-01-21 09:43:25.503766382 +0000 UTC m=+1535.004593431" Jan 21 09:43:25 crc kubenswrapper[5113]: I0121 09:43:25.507617 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-585497b7f5-km7ct" event={"ID":"4cc32795-d962-403a-a4b0-b05770e77786","Type":"ContainerStarted","Data":"8d29966fed77c872492593466b08d90bcdc806a596674d7f1c873221d964e1c0"} Jan 21 09:43:25 crc kubenswrapper[5113]: I0121 09:43:25.507752 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-585497b7f5-km7ct" event={"ID":"4cc32795-d962-403a-a4b0-b05770e77786","Type":"ContainerStarted","Data":"b2209f856278a9ad979859459ca64750b47707cdbd9b1dc2cef275be9a0cf376"} Jan 21 09:43:25 crc kubenswrapper[5113]: I0121 09:43:25.557344 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z" podStartSLOduration=4.595578016 podStartE2EDuration="11.557322187s" podCreationTimestamp="2026-01-21 09:43:14 +0000 UTC" firstStartedPulling="2026-01-21 09:43:17.720785382 +0000 UTC m=+1527.221612431" lastFinishedPulling="2026-01-21 09:43:24.682529553 +0000 UTC m=+1534.183356602" observedRunningTime="2026-01-21 09:43:25.542336293 +0000 UTC m=+1535.043163342" watchObservedRunningTime="2026-01-21 09:43:25.557322187 +0000 UTC m=+1535.058149236" Jan 21 09:43:25 crc kubenswrapper[5113]: I0121 09:43:25.567342 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-b67f79854-pcpch" podStartSLOduration=3.299448273 podStartE2EDuration="3.56732284s" podCreationTimestamp="2026-01-21 09:43:22 +0000 UTC" firstStartedPulling="2026-01-21 09:43:25.043659078 +0000 
UTC m=+1534.544486127" lastFinishedPulling="2026-01-21 09:43:25.311533635 +0000 UTC m=+1534.812360694" observedRunningTime="2026-01-21 09:43:25.561974579 +0000 UTC m=+1535.062801628" watchObservedRunningTime="2026-01-21 09:43:25.56732284 +0000 UTC m=+1535.068149889"
Jan 21 09:43:25 crc kubenswrapper[5113]: I0121 09:43:25.610853 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=18.221992689 podStartE2EDuration="33.610834191s" podCreationTimestamp="2026-01-21 09:42:52 +0000 UTC" firstStartedPulling="2026-01-21 09:43:09.282543766 +0000 UTC m=+1518.783370815" lastFinishedPulling="2026-01-21 09:43:24.671385268 +0000 UTC m=+1534.172212317" observedRunningTime="2026-01-21 09:43:25.610277055 +0000 UTC m=+1535.111104104" watchObservedRunningTime="2026-01-21 09:43:25.610834191 +0000 UTC m=+1535.111661240"
Jan 21 09:43:26 crc kubenswrapper[5113]: I0121 09:43:26.193162 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0"
Jan 21 09:43:26 crc kubenswrapper[5113]: I0121 09:43:26.225708 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0"
Jan 21 09:43:26 crc kubenswrapper[5113]: I0121 09:43:26.515561 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-585497b7f5-km7ct" event={"ID":"4cc32795-d962-403a-a4b0-b05770e77786","Type":"ContainerStarted","Data":"ed3b870d46948e1f2cb7b470819e39d6abcc67ddb8d2ea1bbcc1506ea33b147e"}
Jan 21 09:43:26 crc kubenswrapper[5113]: I0121 09:43:26.536600 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-585497b7f5-km7ct" podStartSLOduration=5.234115361 podStartE2EDuration="5.536580797s" podCreationTimestamp="2026-01-21 09:43:21 +0000 UTC" firstStartedPulling="2026-01-21 09:43:25.118848764 +0000 UTC m=+1534.619675813" lastFinishedPulling="2026-01-21 09:43:25.42131421 +0000 UTC m=+1534.922141249" observedRunningTime="2026-01-21 09:43:26.534603151 +0000 UTC m=+1536.035430200" watchObservedRunningTime="2026-01-21 09:43:26.536580797 +0000 UTC m=+1536.037407846"
Jan 21 09:43:26 crc kubenswrapper[5113]: I0121 09:43:26.572933 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0"
Jan 21 09:43:28 crc kubenswrapper[5113]: I0121 09:43:28.340317 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 09:43:28 crc kubenswrapper[5113]: I0121 09:43:28.341188 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 09:43:28 crc kubenswrapper[5113]: I0121 09:43:28.341396 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt"
Jan 21 09:43:28 crc kubenswrapper[5113]: I0121 09:43:28.342067 5113 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"323c269f606c89f54b9cd8eea875f977746d25c6a01c2f3a0d008ec5a4e7b9bf"} pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 09:43:28 crc kubenswrapper[5113]: I0121 09:43:28.342220 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" containerID="cri-o://323c269f606c89f54b9cd8eea875f977746d25c6a01c2f3a0d008ec5a4e7b9bf" gracePeriod=600
Jan 21 09:43:30 crc kubenswrapper[5113]: I0121 09:43:30.924943 5113 generic.go:358] "Generic (PLEG): container finished" podID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerID="323c269f606c89f54b9cd8eea875f977746d25c6a01c2f3a0d008ec5a4e7b9bf" exitCode=0
Jan 21 09:43:30 crc kubenswrapper[5113]: I0121 09:43:30.925012 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" event={"ID":"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd","Type":"ContainerDied","Data":"323c269f606c89f54b9cd8eea875f977746d25c6a01c2f3a0d008ec5a4e7b9bf"}
Jan 21 09:43:30 crc kubenswrapper[5113]: I0121 09:43:30.925631 5113 scope.go:117] "RemoveContainer" containerID="21cff32383aa2d9d302ef8effdf45aa80c8179b1a391761f749d397c6c018756"
Jan 21 09:43:34 crc kubenswrapper[5113]: E0121 09:43:34.384691 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd"
Jan 21 09:43:35 crc kubenswrapper[5113]: I0121 09:43:35.254470 5113 scope.go:117] "RemoveContainer" containerID="323c269f606c89f54b9cd8eea875f977746d25c6a01c2f3a0d008ec5a4e7b9bf"
Jan 21 09:43:35 crc kubenswrapper[5113]: E0121 09:43:35.254968 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd"
Jan 21 09:43:40 crc kubenswrapper[5113]: I0121 09:43:40.344931 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-zmw4m"]
Jan 21 09:43:40 crc kubenswrapper[5113]: I0121 09:43:40.345415 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/default-interconnect-55bf8d5cb-zmw4m" podUID="e7103133-6521-4b0f-a3ca-068d626b27d5" containerName="default-interconnect" containerID="cri-o://20fbeecef7e154783d70d6ad56ce965f7c272e9c9ec3451c31845496ff8b5eb6" gracePeriod=30
Jan 21 09:43:40 crc kubenswrapper[5113]: I0121 09:43:40.611332 5113 generic.go:358] "Generic (PLEG): container finished" podID="e7103133-6521-4b0f-a3ca-068d626b27d5" containerID="20fbeecef7e154783d70d6ad56ce965f7c272e9c9ec3451c31845496ff8b5eb6" exitCode=0
Jan 21 09:43:40 crc kubenswrapper[5113]: I0121 09:43:40.617274 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-zmw4m" event={"ID":"e7103133-6521-4b0f-a3ca-068d626b27d5","Type":"ContainerDied","Data":"20fbeecef7e154783d70d6ad56ce965f7c272e9c9ec3451c31845496ff8b5eb6"}
Jan 21 09:43:40 crc kubenswrapper[5113]: I0121 09:43:40.619463 5113 generic.go:358] "Generic (PLEG): container finished" podID="4cc32795-d962-403a-a4b0-b05770e77786" containerID="8d29966fed77c872492593466b08d90bcdc806a596674d7f1c873221d964e1c0" exitCode=0
Jan 21 09:43:40 crc kubenswrapper[5113]: I0121 09:43:40.619616 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-585497b7f5-km7ct" event={"ID":"4cc32795-d962-403a-a4b0-b05770e77786","Type":"ContainerDied","Data":"8d29966fed77c872492593466b08d90bcdc806a596674d7f1c873221d964e1c0"}
Jan 21 09:43:40 crc kubenswrapper[5113]: I0121 09:43:40.620183 5113 scope.go:117] "RemoveContainer" containerID="8d29966fed77c872492593466b08d90bcdc806a596674d7f1c873221d964e1c0"
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.235377 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-zmw4m"
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.304147 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/e7103133-6521-4b0f-a3ca-068d626b27d5-default-interconnect-openstack-credentials\") pod \"e7103133-6521-4b0f-a3ca-068d626b27d5\" (UID: \"e7103133-6521-4b0f-a3ca-068d626b27d5\") "
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.304515 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/e7103133-6521-4b0f-a3ca-068d626b27d5-default-interconnect-inter-router-ca\") pod \"e7103133-6521-4b0f-a3ca-068d626b27d5\" (UID: \"e7103133-6521-4b0f-a3ca-068d626b27d5\") "
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.304565 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/e7103133-6521-4b0f-a3ca-068d626b27d5-sasl-config\") pod \"e7103133-6521-4b0f-a3ca-068d626b27d5\" (UID: \"e7103133-6521-4b0f-a3ca-068d626b27d5\") "
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.304581 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/e7103133-6521-4b0f-a3ca-068d626b27d5-default-interconnect-openstack-ca\") pod \"e7103133-6521-4b0f-a3ca-068d626b27d5\" (UID: \"e7103133-6521-4b0f-a3ca-068d626b27d5\") "
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.304605 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/e7103133-6521-4b0f-a3ca-068d626b27d5-default-interconnect-inter-router-credentials\") pod \"e7103133-6521-4b0f-a3ca-068d626b27d5\" (UID: \"e7103133-6521-4b0f-a3ca-068d626b27d5\") "
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.304680 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/e7103133-6521-4b0f-a3ca-068d626b27d5-sasl-users\") pod \"e7103133-6521-4b0f-a3ca-068d626b27d5\" (UID: \"e7103133-6521-4b0f-a3ca-068d626b27d5\") "
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.304700 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8vcn\" (UniqueName: \"kubernetes.io/projected/e7103133-6521-4b0f-a3ca-068d626b27d5-kube-api-access-m8vcn\") pod \"e7103133-6521-4b0f-a3ca-068d626b27d5\" (UID: \"e7103133-6521-4b0f-a3ca-068d626b27d5\") "
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.310570 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7103133-6521-4b0f-a3ca-068d626b27d5-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "e7103133-6521-4b0f-a3ca-068d626b27d5" (UID: "e7103133-6521-4b0f-a3ca-068d626b27d5"). InnerVolumeSpecName "sasl-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.310753 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7103133-6521-4b0f-a3ca-068d626b27d5-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "e7103133-6521-4b0f-a3ca-068d626b27d5" (UID: "e7103133-6521-4b0f-a3ca-068d626b27d5"). InnerVolumeSpecName "default-interconnect-inter-router-ca". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.317541 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-fqvvc"]
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.318271 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e7103133-6521-4b0f-a3ca-068d626b27d5" containerName="default-interconnect"
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.318291 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7103133-6521-4b0f-a3ca-068d626b27d5" containerName="default-interconnect"
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.318526 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="e7103133-6521-4b0f-a3ca-068d626b27d5" containerName="default-interconnect"
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.318711 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7103133-6521-4b0f-a3ca-068d626b27d5-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "e7103133-6521-4b0f-a3ca-068d626b27d5" (UID: "e7103133-6521-4b0f-a3ca-068d626b27d5"). InnerVolumeSpecName "default-interconnect-openstack-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.319229 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7103133-6521-4b0f-a3ca-068d626b27d5-kube-api-access-m8vcn" (OuterVolumeSpecName: "kube-api-access-m8vcn") pod "e7103133-6521-4b0f-a3ca-068d626b27d5" (UID: "e7103133-6521-4b0f-a3ca-068d626b27d5"). InnerVolumeSpecName "kube-api-access-m8vcn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.319476 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7103133-6521-4b0f-a3ca-068d626b27d5-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "e7103133-6521-4b0f-a3ca-068d626b27d5" (UID: "e7103133-6521-4b0f-a3ca-068d626b27d5"). InnerVolumeSpecName "sasl-users". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.321538 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7103133-6521-4b0f-a3ca-068d626b27d5-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "e7103133-6521-4b0f-a3ca-068d626b27d5" (UID: "e7103133-6521-4b0f-a3ca-068d626b27d5"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.321720 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7103133-6521-4b0f-a3ca-068d626b27d5-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "e7103133-6521-4b0f-a3ca-068d626b27d5" (UID: "e7103133-6521-4b0f-a3ca-068d626b27d5"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.322183 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-fqvvc"
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.331899 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-fqvvc"]
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.406273 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/a845c762-53f6-44eb-9c7a-31755d333fe4-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-fqvvc\" (UID: \"a845c762-53f6-44eb-9c7a-31755d333fe4\") " pod="service-telemetry/default-interconnect-55bf8d5cb-fqvvc"
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.406370 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/a845c762-53f6-44eb-9c7a-31755d333fe4-sasl-users\") pod \"default-interconnect-55bf8d5cb-fqvvc\" (UID: \"a845c762-53f6-44eb-9c7a-31755d333fe4\") " pod="service-telemetry/default-interconnect-55bf8d5cb-fqvvc"
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.406417 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/a845c762-53f6-44eb-9c7a-31755d333fe4-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-fqvvc\" (UID: \"a845c762-53f6-44eb-9c7a-31755d333fe4\") " pod="service-telemetry/default-interconnect-55bf8d5cb-fqvvc"
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.406460 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/a845c762-53f6-44eb-9c7a-31755d333fe4-sasl-config\") pod \"default-interconnect-55bf8d5cb-fqvvc\" (UID: \"a845c762-53f6-44eb-9c7a-31755d333fe4\") " pod="service-telemetry/default-interconnect-55bf8d5cb-fqvvc"
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.406492 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/a845c762-53f6-44eb-9c7a-31755d333fe4-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-fqvvc\" (UID: \"a845c762-53f6-44eb-9c7a-31755d333fe4\") " pod="service-telemetry/default-interconnect-55bf8d5cb-fqvvc"
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.406510 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/a845c762-53f6-44eb-9c7a-31755d333fe4-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-fqvvc\" (UID: \"a845c762-53f6-44eb-9c7a-31755d333fe4\") " pod="service-telemetry/default-interconnect-55bf8d5cb-fqvvc"
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.406530 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85q2q\" (UniqueName: \"kubernetes.io/projected/a845c762-53f6-44eb-9c7a-31755d333fe4-kube-api-access-85q2q\") pod \"default-interconnect-55bf8d5cb-fqvvc\" (UID: \"a845c762-53f6-44eb-9c7a-31755d333fe4\") " pod="service-telemetry/default-interconnect-55bf8d5cb-fqvvc"
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.406587 5113 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/e7103133-6521-4b0f-a3ca-068d626b27d5-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\""
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.406608 5113 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/e7103133-6521-4b0f-a3ca-068d626b27d5-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\""
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.406619 5113 reconciler_common.go:299] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/e7103133-6521-4b0f-a3ca-068d626b27d5-sasl-config\") on node \"crc\" DevicePath \"\""
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.406628 5113 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/e7103133-6521-4b0f-a3ca-068d626b27d5-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\""
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.406636 5113 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/e7103133-6521-4b0f-a3ca-068d626b27d5-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\""
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.406647 5113 reconciler_common.go:299] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/e7103133-6521-4b0f-a3ca-068d626b27d5-sasl-users\") on node \"crc\" DevicePath \"\""
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.406655 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m8vcn\" (UniqueName: \"kubernetes.io/projected/e7103133-6521-4b0f-a3ca-068d626b27d5-kube-api-access-m8vcn\") on node \"crc\" DevicePath \"\""
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.508437 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/a845c762-53f6-44eb-9c7a-31755d333fe4-sasl-config\") pod \"default-interconnect-55bf8d5cb-fqvvc\" (UID: \"a845c762-53f6-44eb-9c7a-31755d333fe4\") " pod="service-telemetry/default-interconnect-55bf8d5cb-fqvvc"
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.508753 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/a845c762-53f6-44eb-9c7a-31755d333fe4-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-fqvvc\" (UID: \"a845c762-53f6-44eb-9c7a-31755d333fe4\") " pod="service-telemetry/default-interconnect-55bf8d5cb-fqvvc"
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.508878 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/a845c762-53f6-44eb-9c7a-31755d333fe4-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-fqvvc\" (UID: \"a845c762-53f6-44eb-9c7a-31755d333fe4\") " pod="service-telemetry/default-interconnect-55bf8d5cb-fqvvc"
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.509014 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-85q2q\" (UniqueName: \"kubernetes.io/projected/a845c762-53f6-44eb-9c7a-31755d333fe4-kube-api-access-85q2q\") pod \"default-interconnect-55bf8d5cb-fqvvc\" (UID: \"a845c762-53f6-44eb-9c7a-31755d333fe4\") " pod="service-telemetry/default-interconnect-55bf8d5cb-fqvvc"
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.509185 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/a845c762-53f6-44eb-9c7a-31755d333fe4-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-fqvvc\" (UID: \"a845c762-53f6-44eb-9c7a-31755d333fe4\") " pod="service-telemetry/default-interconnect-55bf8d5cb-fqvvc"
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.509297 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/a845c762-53f6-44eb-9c7a-31755d333fe4-sasl-users\") pod \"default-interconnect-55bf8d5cb-fqvvc\" (UID: \"a845c762-53f6-44eb-9c7a-31755d333fe4\") " pod="service-telemetry/default-interconnect-55bf8d5cb-fqvvc"
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.509378 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/a845c762-53f6-44eb-9c7a-31755d333fe4-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-fqvvc\" (UID: \"a845c762-53f6-44eb-9c7a-31755d333fe4\") " pod="service-telemetry/default-interconnect-55bf8d5cb-fqvvc"
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.510650 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/a845c762-53f6-44eb-9c7a-31755d333fe4-sasl-config\") pod \"default-interconnect-55bf8d5cb-fqvvc\" (UID: \"a845c762-53f6-44eb-9c7a-31755d333fe4\") " pod="service-telemetry/default-interconnect-55bf8d5cb-fqvvc"
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.516246 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/a845c762-53f6-44eb-9c7a-31755d333fe4-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-fqvvc\" (UID: \"a845c762-53f6-44eb-9c7a-31755d333fe4\") " pod="service-telemetry/default-interconnect-55bf8d5cb-fqvvc"
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.516370 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/a845c762-53f6-44eb-9c7a-31755d333fe4-sasl-users\") pod \"default-interconnect-55bf8d5cb-fqvvc\" (UID: \"a845c762-53f6-44eb-9c7a-31755d333fe4\") " pod="service-telemetry/default-interconnect-55bf8d5cb-fqvvc"
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.516479 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/a845c762-53f6-44eb-9c7a-31755d333fe4-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-fqvvc\" (UID: \"a845c762-53f6-44eb-9c7a-31755d333fe4\") " pod="service-telemetry/default-interconnect-55bf8d5cb-fqvvc"
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.517057 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/a845c762-53f6-44eb-9c7a-31755d333fe4-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-fqvvc\" (UID: \"a845c762-53f6-44eb-9c7a-31755d333fe4\") " pod="service-telemetry/default-interconnect-55bf8d5cb-fqvvc"
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.517109 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/a845c762-53f6-44eb-9c7a-31755d333fe4-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-fqvvc\" (UID: \"a845c762-53f6-44eb-9c7a-31755d333fe4\") " pod="service-telemetry/default-interconnect-55bf8d5cb-fqvvc"
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.533328 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-85q2q\" (UniqueName: \"kubernetes.io/projected/a845c762-53f6-44eb-9c7a-31755d333fe4-kube-api-access-85q2q\") pod \"default-interconnect-55bf8d5cb-fqvvc\" (UID: \"a845c762-53f6-44eb-9c7a-31755d333fe4\") " pod="service-telemetry/default-interconnect-55bf8d5cb-fqvvc"
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.628027 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-zmw4m"
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.628403 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-zmw4m" event={"ID":"e7103133-6521-4b0f-a3ca-068d626b27d5","Type":"ContainerDied","Data":"5d24566eaf38f451d453f890f873467703703a1b451ea1c4453216b1d397a1b5"}
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.629018 5113 scope.go:117] "RemoveContainer" containerID="20fbeecef7e154783d70d6ad56ce965f7c272e9c9ec3451c31845496ff8b5eb6"
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.634877 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst" event={"ID":"44130ffd-90a2-4b98-b98a-28c10f45a9ca","Type":"ContainerDied","Data":"a496df424f0207d4250013c0040f76aa9660bad18a42e0284c6bab06a60c0f3b"}
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.634842 5113 generic.go:358] "Generic (PLEG): container finished" podID="44130ffd-90a2-4b98-b98a-28c10f45a9ca" containerID="a496df424f0207d4250013c0040f76aa9660bad18a42e0284c6bab06a60c0f3b" exitCode=0
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.635440 5113 scope.go:117] "RemoveContainer" containerID="a496df424f0207d4250013c0040f76aa9660bad18a42e0284c6bab06a60c0f3b"
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.639051 5113 generic.go:358] "Generic (PLEG): container finished" podID="aa026e59-d4cb-4580-b908-a08b359465a2" containerID="ce2374ee4d90d94ae6f1b503929052fecf0e17986648abc4c10e2e2bbe55f0ef" exitCode=0
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.639234 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq" event={"ID":"aa026e59-d4cb-4580-b908-a08b359465a2","Type":"ContainerDied","Data":"ce2374ee4d90d94ae6f1b503929052fecf0e17986648abc4c10e2e2bbe55f0ef"}
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.639944 5113 scope.go:117] "RemoveContainer" containerID="ce2374ee4d90d94ae6f1b503929052fecf0e17986648abc4c10e2e2bbe55f0ef"
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.645291 5113 generic.go:358] "Generic (PLEG): container finished" podID="ecf101f2-0ba8-4f13-8b28-25f3102f6907" containerID="2987f5ef345b0fd5d70606da5f8b323d2e579e5bae8bb8b919585aaa37454865" exitCode=0
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.645356 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z" event={"ID":"ecf101f2-0ba8-4f13-8b28-25f3102f6907","Type":"ContainerDied","Data":"2987f5ef345b0fd5d70606da5f8b323d2e579e5bae8bb8b919585aaa37454865"}
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.645850 5113 scope.go:117] "RemoveContainer" containerID="2987f5ef345b0fd5d70606da5f8b323d2e579e5bae8bb8b919585aaa37454865"
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.649164 5113 generic.go:358] "Generic (PLEG): container finished" podID="8d650442-026a-4ce5-9d25-8354edb3df27" containerID="98208026d40ae892ae80c0acd476cef481b3602353a576ca0394d99a6745e590" exitCode=0
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.649365 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-b67f79854-pcpch" event={"ID":"8d650442-026a-4ce5-9d25-8354edb3df27","Type":"ContainerDied","Data":"98208026d40ae892ae80c0acd476cef481b3602353a576ca0394d99a6745e590"}
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.649932 5113 scope.go:117] "RemoveContainer" containerID="98208026d40ae892ae80c0acd476cef481b3602353a576ca0394d99a6745e590"
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.654683 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-585497b7f5-km7ct" event={"ID":"4cc32795-d962-403a-a4b0-b05770e77786","Type":"ContainerStarted","Data":"db4f3a164ba5ef247b06752c9abe0fc5c6df6d898c2b960b867af49f16a6d183"}
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.658628 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-fqvvc"
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.741376 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-zmw4m"]
Jan 21 09:43:41 crc kubenswrapper[5113]: I0121 09:43:41.747711 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-zmw4m"]
Jan 21 09:43:42 crc kubenswrapper[5113]: I0121 09:43:42.241635 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-fqvvc"]
Jan 21 09:43:42 crc kubenswrapper[5113]: I0121 09:43:42.679277 5113 generic.go:358] "Generic (PLEG): container finished" podID="4cc32795-d962-403a-a4b0-b05770e77786" containerID="db4f3a164ba5ef247b06752c9abe0fc5c6df6d898c2b960b867af49f16a6d183" exitCode=0
Jan 21 09:43:42 crc kubenswrapper[5113]: I0121 09:43:42.679682 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-585497b7f5-km7ct" event={"ID":"4cc32795-d962-403a-a4b0-b05770e77786","Type":"ContainerDied","Data":"db4f3a164ba5ef247b06752c9abe0fc5c6df6d898c2b960b867af49f16a6d183"}
Jan 21 09:43:42 crc kubenswrapper[5113]: I0121 09:43:42.679723 5113 scope.go:117] "RemoveContainer" containerID="8d29966fed77c872492593466b08d90bcdc806a596674d7f1c873221d964e1c0"
Jan 21 09:43:42 crc kubenswrapper[5113]: I0121 09:43:42.680455 5113 scope.go:117] "RemoveContainer" containerID="db4f3a164ba5ef247b06752c9abe0fc5c6df6d898c2b960b867af49f16a6d183"
Jan 21 09:43:42 crc kubenswrapper[5113]: E0121 09:43:42.680962 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-585497b7f5-km7ct_service-telemetry(4cc32795-d962-403a-a4b0-b05770e77786)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-585497b7f5-km7ct" podUID="4cc32795-d962-403a-a4b0-b05770e77786"
Jan 21 09:43:42 crc kubenswrapper[5113]: I0121 09:43:42.685290 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-fqvvc" event={"ID":"a845c762-53f6-44eb-9c7a-31755d333fe4","Type":"ContainerStarted","Data":"c4858ef3ac960214a02774c47c509e21ad1a5eb133c37e9009ac9d95001cbfd8"}
Jan 21 09:43:42 crc kubenswrapper[5113]: I0121 09:43:42.685331 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-fqvvc" event={"ID":"a845c762-53f6-44eb-9c7a-31755d333fe4","Type":"ContainerStarted","Data":"aa2984cf99cade32744b90325fd989d04584445de106f1f4b11b4edfea8b8551"}
Jan 21 09:43:42 crc kubenswrapper[5113]: I0121 09:43:42.700410 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst" event={"ID":"44130ffd-90a2-4b98-b98a-28c10f45a9ca","Type":"ContainerStarted","Data":"e0d648e87fee0e395cc1ebdb8e883c3e2c9f75b6da7c8c63c9a451385481aac2"}
Jan 21 09:43:42 crc kubenswrapper[5113]: I0121 09:43:42.714316 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq" event={"ID":"aa026e59-d4cb-4580-b908-a08b359465a2","Type":"ContainerStarted","Data":"192490a20cbf60cf817c8f2b62866642b26bd2709c1d2548e6215f7d48544c90"}
Jan 21 09:43:42 crc kubenswrapper[5113]: I0121 09:43:42.723774 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z" event={"ID":"ecf101f2-0ba8-4f13-8b28-25f3102f6907","Type":"ContainerStarted","Data":"f79a82752e02a5e238c5d6f01302276eac76d874f075f0c47689605c2d8a3861"}
Jan 21 09:43:42 crc kubenswrapper[5113]: I0121 09:43:42.732648 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-b67f79854-pcpch" event={"ID":"8d650442-026a-4ce5-9d25-8354edb3df27","Type":"ContainerStarted","Data":"4666cc87b1953b92bff01f6a7be1271ca350c67788513c59d59c6b9a02877d97"}
Jan 21 09:43:42 crc kubenswrapper[5113]: I0121 09:43:42.749847 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-fqvvc" podStartSLOduration=2.749717705 podStartE2EDuration="2.749717705s" podCreationTimestamp="2026-01-21 09:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 09:43:42.74281624 +0000 UTC m=+1552.243643289" watchObservedRunningTime="2026-01-21 09:43:42.749717705 +0000 UTC m=+1552.250544754"
Jan 21 09:43:42 crc kubenswrapper[5113]: I0121 09:43:42.855462 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7103133-6521-4b0f-a3ca-068d626b27d5" path="/var/lib/kubelet/pods/e7103133-6521-4b0f-a3ca-068d626b27d5/volumes"
Jan 21 09:43:43 crc kubenswrapper[5113]: I0121 09:43:43.586332 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"]
Jan 21 09:43:43 crc kubenswrapper[5113]: I0121 09:43:43.593060 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test"
Jan 21 09:43:43 crc kubenswrapper[5113]: I0121 09:43:43.599226 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-selfsigned\""
Jan 21 09:43:43 crc kubenswrapper[5113]: I0121 09:43:43.600306 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"qdr-test-config\""
Jan 21 09:43:43 crc kubenswrapper[5113]: I0121 09:43:43.601085 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"]
Jan 21 09:43:43 crc kubenswrapper[5113]: I0121 09:43:43.743491 5113 generic.go:358] "Generic (PLEG): container finished" podID="44130ffd-90a2-4b98-b98a-28c10f45a9ca" containerID="e0d648e87fee0e395cc1ebdb8e883c3e2c9f75b6da7c8c63c9a451385481aac2" exitCode=0
Jan 21 09:43:43 crc kubenswrapper[5113]: I0121 09:43:43.743560 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst" event={"ID":"44130ffd-90a2-4b98-b98a-28c10f45a9ca","Type":"ContainerDied","Data":"e0d648e87fee0e395cc1ebdb8e883c3e2c9f75b6da7c8c63c9a451385481aac2"}
Jan 21 09:43:43 crc kubenswrapper[5113]: I0121 09:43:43.743600 5113 scope.go:117] "RemoveContainer" containerID="a496df424f0207d4250013c0040f76aa9660bad18a42e0284c6bab06a60c0f3b"
Jan 21 09:43:43 crc kubenswrapper[5113]: I0121 09:43:43.744153 5113 scope.go:117] "RemoveContainer" containerID="e0d648e87fee0e395cc1ebdb8e883c3e2c9f75b6da7c8c63c9a451385481aac2"
Jan 21 09:43:43 crc kubenswrapper[5113]: E0121 09:43:43.744559 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst_service-telemetry(44130ffd-90a2-4b98-b98a-28c10f45a9ca)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst"
podUID="44130ffd-90a2-4b98-b98a-28c10f45a9ca" Jan 21 09:43:43 crc kubenswrapper[5113]: I0121 09:43:43.745886 5113 generic.go:358] "Generic (PLEG): container finished" podID="aa026e59-d4cb-4580-b908-a08b359465a2" containerID="192490a20cbf60cf817c8f2b62866642b26bd2709c1d2548e6215f7d48544c90" exitCode=0 Jan 21 09:43:43 crc kubenswrapper[5113]: I0121 09:43:43.745967 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq" event={"ID":"aa026e59-d4cb-4580-b908-a08b359465a2","Type":"ContainerDied","Data":"192490a20cbf60cf817c8f2b62866642b26bd2709c1d2548e6215f7d48544c90"} Jan 21 09:43:43 crc kubenswrapper[5113]: I0121 09:43:43.746456 5113 scope.go:117] "RemoveContainer" containerID="192490a20cbf60cf817c8f2b62866642b26bd2709c1d2548e6215f7d48544c90" Jan 21 09:43:43 crc kubenswrapper[5113]: E0121 09:43:43.746701 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq_service-telemetry(aa026e59-d4cb-4580-b908-a08b359465a2)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq" podUID="aa026e59-d4cb-4580-b908-a08b359465a2" Jan 21 09:43:43 crc kubenswrapper[5113]: I0121 09:43:43.748161 5113 generic.go:358] "Generic (PLEG): container finished" podID="ecf101f2-0ba8-4f13-8b28-25f3102f6907" containerID="f79a82752e02a5e238c5d6f01302276eac76d874f075f0c47689605c2d8a3861" exitCode=0 Jan 21 09:43:43 crc kubenswrapper[5113]: I0121 09:43:43.748202 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z" event={"ID":"ecf101f2-0ba8-4f13-8b28-25f3102f6907","Type":"ContainerDied","Data":"f79a82752e02a5e238c5d6f01302276eac76d874f075f0c47689605c2d8a3861"} Jan 21 09:43:43 crc kubenswrapper[5113]: I0121 09:43:43.748665 5113 scope.go:117] 
"RemoveContainer" containerID="f79a82752e02a5e238c5d6f01302276eac76d874f075f0c47689605c2d8a3861" Jan 21 09:43:43 crc kubenswrapper[5113]: E0121 09:43:43.748958 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z_service-telemetry(ecf101f2-0ba8-4f13-8b28-25f3102f6907)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z" podUID="ecf101f2-0ba8-4f13-8b28-25f3102f6907" Jan 21 09:43:43 crc kubenswrapper[5113]: I0121 09:43:43.750043 5113 generic.go:358] "Generic (PLEG): container finished" podID="8d650442-026a-4ce5-9d25-8354edb3df27" containerID="4666cc87b1953b92bff01f6a7be1271ca350c67788513c59d59c6b9a02877d97" exitCode=0 Jan 21 09:43:43 crc kubenswrapper[5113]: I0121 09:43:43.750566 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-b67f79854-pcpch" event={"ID":"8d650442-026a-4ce5-9d25-8354edb3df27","Type":"ContainerDied","Data":"4666cc87b1953b92bff01f6a7be1271ca350c67788513c59d59c6b9a02877d97"} Jan 21 09:43:43 crc kubenswrapper[5113]: I0121 09:43:43.750890 5113 scope.go:117] "RemoveContainer" containerID="4666cc87b1953b92bff01f6a7be1271ca350c67788513c59d59c6b9a02877d97" Jan 21 09:43:43 crc kubenswrapper[5113]: E0121 09:43:43.751080 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-b67f79854-pcpch_service-telemetry(8d650442-026a-4ce5-9d25-8354edb3df27)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-b67f79854-pcpch" podUID="8d650442-026a-4ce5-9d25-8354edb3df27" Jan 21 09:43:43 crc kubenswrapper[5113]: I0121 09:43:43.765691 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-tkvbv\" (UniqueName: \"kubernetes.io/projected/c151bef3-8a92-45e0-bfeb-53f319f6d6ce-kube-api-access-tkvbv\") pod \"qdr-test\" (UID: \"c151bef3-8a92-45e0-bfeb-53f319f6d6ce\") " pod="service-telemetry/qdr-test" Jan 21 09:43:43 crc kubenswrapper[5113]: I0121 09:43:43.765779 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/c151bef3-8a92-45e0-bfeb-53f319f6d6ce-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"c151bef3-8a92-45e0-bfeb-53f319f6d6ce\") " pod="service-telemetry/qdr-test" Jan 21 09:43:43 crc kubenswrapper[5113]: I0121 09:43:43.765928 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/c151bef3-8a92-45e0-bfeb-53f319f6d6ce-qdr-test-config\") pod \"qdr-test\" (UID: \"c151bef3-8a92-45e0-bfeb-53f319f6d6ce\") " pod="service-telemetry/qdr-test" Jan 21 09:43:43 crc kubenswrapper[5113]: I0121 09:43:43.790008 5113 scope.go:117] "RemoveContainer" containerID="ce2374ee4d90d94ae6f1b503929052fecf0e17986648abc4c10e2e2bbe55f0ef" Jan 21 09:43:43 crc kubenswrapper[5113]: I0121 09:43:43.844101 5113 scope.go:117] "RemoveContainer" containerID="2987f5ef345b0fd5d70606da5f8b323d2e579e5bae8bb8b919585aaa37454865" Jan 21 09:43:43 crc kubenswrapper[5113]: I0121 09:43:43.868883 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tkvbv\" (UniqueName: \"kubernetes.io/projected/c151bef3-8a92-45e0-bfeb-53f319f6d6ce-kube-api-access-tkvbv\") pod \"qdr-test\" (UID: \"c151bef3-8a92-45e0-bfeb-53f319f6d6ce\") " pod="service-telemetry/qdr-test" Jan 21 09:43:43 crc kubenswrapper[5113]: I0121 09:43:43.869017 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: 
\"kubernetes.io/secret/c151bef3-8a92-45e0-bfeb-53f319f6d6ce-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"c151bef3-8a92-45e0-bfeb-53f319f6d6ce\") " pod="service-telemetry/qdr-test" Jan 21 09:43:43 crc kubenswrapper[5113]: I0121 09:43:43.869158 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/c151bef3-8a92-45e0-bfeb-53f319f6d6ce-qdr-test-config\") pod \"qdr-test\" (UID: \"c151bef3-8a92-45e0-bfeb-53f319f6d6ce\") " pod="service-telemetry/qdr-test" Jan 21 09:43:43 crc kubenswrapper[5113]: I0121 09:43:43.869965 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/c151bef3-8a92-45e0-bfeb-53f319f6d6ce-qdr-test-config\") pod \"qdr-test\" (UID: \"c151bef3-8a92-45e0-bfeb-53f319f6d6ce\") " pod="service-telemetry/qdr-test" Jan 21 09:43:43 crc kubenswrapper[5113]: I0121 09:43:43.878406 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/c151bef3-8a92-45e0-bfeb-53f319f6d6ce-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"c151bef3-8a92-45e0-bfeb-53f319f6d6ce\") " pod="service-telemetry/qdr-test" Jan 21 09:43:43 crc kubenswrapper[5113]: I0121 09:43:43.886962 5113 scope.go:117] "RemoveContainer" containerID="98208026d40ae892ae80c0acd476cef481b3602353a576ca0394d99a6745e590" Jan 21 09:43:43 crc kubenswrapper[5113]: I0121 09:43:43.891267 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkvbv\" (UniqueName: \"kubernetes.io/projected/c151bef3-8a92-45e0-bfeb-53f319f6d6ce-kube-api-access-tkvbv\") pod \"qdr-test\" (UID: \"c151bef3-8a92-45e0-bfeb-53f319f6d6ce\") " pod="service-telemetry/qdr-test" Jan 21 09:43:43 crc kubenswrapper[5113]: I0121 09:43:43.908084 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/qdr-test" Jan 21 09:43:44 crc kubenswrapper[5113]: W0121 09:43:44.335471 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc151bef3_8a92_45e0_bfeb_53f319f6d6ce.slice/crio-ec3b812bd169d4d8b6753b5ed8d286c3a8629c083e76126270d11793fabface6 WatchSource:0}: Error finding container ec3b812bd169d4d8b6753b5ed8d286c3a8629c083e76126270d11793fabface6: Status 404 returned error can't find the container with id ec3b812bd169d4d8b6753b5ed8d286c3a8629c083e76126270d11793fabface6 Jan 21 09:43:44 crc kubenswrapper[5113]: I0121 09:43:44.335652 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Jan 21 09:43:44 crc kubenswrapper[5113]: I0121 09:43:44.756898 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"c151bef3-8a92-45e0-bfeb-53f319f6d6ce","Type":"ContainerStarted","Data":"ec3b812bd169d4d8b6753b5ed8d286c3a8629c083e76126270d11793fabface6"} Jan 21 09:43:46 crc kubenswrapper[5113]: I0121 09:43:46.843335 5113 scope.go:117] "RemoveContainer" containerID="323c269f606c89f54b9cd8eea875f977746d25c6a01c2f3a0d008ec5a4e7b9bf" Jan 21 09:43:46 crc kubenswrapper[5113]: E0121 09:43:46.844052 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 09:43:52 crc kubenswrapper[5113]: I0121 09:43:52.822910 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" 
event={"ID":"c151bef3-8a92-45e0-bfeb-53f319f6d6ce","Type":"ContainerStarted","Data":"3ce613aabc1f013a761a19cf8e83a1abd24844aa23143ff665dd0741d543bd15"} Jan 21 09:43:52 crc kubenswrapper[5113]: I0121 09:43:52.852569 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=1.9783991749999998 podStartE2EDuration="9.852549195s" podCreationTimestamp="2026-01-21 09:43:43 +0000 UTC" firstStartedPulling="2026-01-21 09:43:44.337871727 +0000 UTC m=+1553.838698766" lastFinishedPulling="2026-01-21 09:43:52.212021737 +0000 UTC m=+1561.712848786" observedRunningTime="2026-01-21 09:43:52.848895752 +0000 UTC m=+1562.349722811" watchObservedRunningTime="2026-01-21 09:43:52.852549195 +0000 UTC m=+1562.353376264" Jan 21 09:43:53 crc kubenswrapper[5113]: I0121 09:43:53.173504 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-m475d"] Jan 21 09:43:53 crc kubenswrapper[5113]: I0121 09:43:53.183144 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-m475d"] Jan 21 09:43:53 crc kubenswrapper[5113]: I0121 09:43:53.183316 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-m475d" Jan 21 09:43:53 crc kubenswrapper[5113]: I0121 09:43:53.187050 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-config\"" Jan 21 09:43:53 crc kubenswrapper[5113]: I0121 09:43:53.187641 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-publisher\"" Jan 21 09:43:53 crc kubenswrapper[5113]: I0121 09:43:53.187830 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-entrypoint-script\"" Jan 21 09:43:53 crc kubenswrapper[5113]: I0121 09:43:53.188247 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-healthcheck-log\"" Jan 21 09:43:53 crc kubenswrapper[5113]: I0121 09:43:53.188431 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-entrypoint-script\"" Jan 21 09:43:53 crc kubenswrapper[5113]: I0121 09:43:53.188783 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-sensubility-config\"" Jan 21 09:43:53 crc kubenswrapper[5113]: I0121 09:43:53.196862 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/297af4e8-245e-4df4-b837-e50e334d7b17-healthcheck-log\") pod \"stf-smoketest-smoke1-m475d\" (UID: \"297af4e8-245e-4df4-b837-e50e334d7b17\") " pod="service-telemetry/stf-smoketest-smoke1-m475d" Jan 21 09:43:53 crc kubenswrapper[5113]: I0121 09:43:53.197035 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: 
\"kubernetes.io/configmap/297af4e8-245e-4df4-b837-e50e334d7b17-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-m475d\" (UID: \"297af4e8-245e-4df4-b837-e50e334d7b17\") " pod="service-telemetry/stf-smoketest-smoke1-m475d" Jan 21 09:43:53 crc kubenswrapper[5113]: I0121 09:43:53.197162 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/297af4e8-245e-4df4-b837-e50e334d7b17-collectd-config\") pod \"stf-smoketest-smoke1-m475d\" (UID: \"297af4e8-245e-4df4-b837-e50e334d7b17\") " pod="service-telemetry/stf-smoketest-smoke1-m475d" Jan 21 09:43:53 crc kubenswrapper[5113]: I0121 09:43:53.197238 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/297af4e8-245e-4df4-b837-e50e334d7b17-sensubility-config\") pod \"stf-smoketest-smoke1-m475d\" (UID: \"297af4e8-245e-4df4-b837-e50e334d7b17\") " pod="service-telemetry/stf-smoketest-smoke1-m475d" Jan 21 09:43:53 crc kubenswrapper[5113]: I0121 09:43:53.197309 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b85s5\" (UniqueName: \"kubernetes.io/projected/297af4e8-245e-4df4-b837-e50e334d7b17-kube-api-access-b85s5\") pod \"stf-smoketest-smoke1-m475d\" (UID: \"297af4e8-245e-4df4-b837-e50e334d7b17\") " pod="service-telemetry/stf-smoketest-smoke1-m475d" Jan 21 09:43:53 crc kubenswrapper[5113]: I0121 09:43:53.197382 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/297af4e8-245e-4df4-b837-e50e334d7b17-ceilometer-publisher\") pod \"stf-smoketest-smoke1-m475d\" (UID: \"297af4e8-245e-4df4-b837-e50e334d7b17\") " pod="service-telemetry/stf-smoketest-smoke1-m475d" Jan 21 09:43:53 crc kubenswrapper[5113]: I0121 09:43:53.197477 5113 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/297af4e8-245e-4df4-b837-e50e334d7b17-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-m475d\" (UID: \"297af4e8-245e-4df4-b837-e50e334d7b17\") " pod="service-telemetry/stf-smoketest-smoke1-m475d" Jan 21 09:43:53 crc kubenswrapper[5113]: I0121 09:43:53.299440 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/297af4e8-245e-4df4-b837-e50e334d7b17-collectd-config\") pod \"stf-smoketest-smoke1-m475d\" (UID: \"297af4e8-245e-4df4-b837-e50e334d7b17\") " pod="service-telemetry/stf-smoketest-smoke1-m475d" Jan 21 09:43:53 crc kubenswrapper[5113]: I0121 09:43:53.299490 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/297af4e8-245e-4df4-b837-e50e334d7b17-sensubility-config\") pod \"stf-smoketest-smoke1-m475d\" (UID: \"297af4e8-245e-4df4-b837-e50e334d7b17\") " pod="service-telemetry/stf-smoketest-smoke1-m475d" Jan 21 09:43:53 crc kubenswrapper[5113]: I0121 09:43:53.299509 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b85s5\" (UniqueName: \"kubernetes.io/projected/297af4e8-245e-4df4-b837-e50e334d7b17-kube-api-access-b85s5\") pod \"stf-smoketest-smoke1-m475d\" (UID: \"297af4e8-245e-4df4-b837-e50e334d7b17\") " pod="service-telemetry/stf-smoketest-smoke1-m475d" Jan 21 09:43:53 crc kubenswrapper[5113]: I0121 09:43:53.299528 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/297af4e8-245e-4df4-b837-e50e334d7b17-ceilometer-publisher\") pod \"stf-smoketest-smoke1-m475d\" (UID: \"297af4e8-245e-4df4-b837-e50e334d7b17\") " pod="service-telemetry/stf-smoketest-smoke1-m475d" Jan 21 09:43:53 crc 
kubenswrapper[5113]: I0121 09:43:53.299552 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/297af4e8-245e-4df4-b837-e50e334d7b17-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-m475d\" (UID: \"297af4e8-245e-4df4-b837-e50e334d7b17\") " pod="service-telemetry/stf-smoketest-smoke1-m475d" Jan 21 09:43:53 crc kubenswrapper[5113]: I0121 09:43:53.299596 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/297af4e8-245e-4df4-b837-e50e334d7b17-healthcheck-log\") pod \"stf-smoketest-smoke1-m475d\" (UID: \"297af4e8-245e-4df4-b837-e50e334d7b17\") " pod="service-telemetry/stf-smoketest-smoke1-m475d" Jan 21 09:43:53 crc kubenswrapper[5113]: I0121 09:43:53.299630 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/297af4e8-245e-4df4-b837-e50e334d7b17-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-m475d\" (UID: \"297af4e8-245e-4df4-b837-e50e334d7b17\") " pod="service-telemetry/stf-smoketest-smoke1-m475d" Jan 21 09:43:53 crc kubenswrapper[5113]: I0121 09:43:53.300607 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/297af4e8-245e-4df4-b837-e50e334d7b17-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-m475d\" (UID: \"297af4e8-245e-4df4-b837-e50e334d7b17\") " pod="service-telemetry/stf-smoketest-smoke1-m475d" Jan 21 09:43:53 crc kubenswrapper[5113]: I0121 09:43:53.300788 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/297af4e8-245e-4df4-b837-e50e334d7b17-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-m475d\" (UID: \"297af4e8-245e-4df4-b837-e50e334d7b17\") " 
pod="service-telemetry/stf-smoketest-smoke1-m475d" Jan 21 09:43:53 crc kubenswrapper[5113]: I0121 09:43:53.300794 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/297af4e8-245e-4df4-b837-e50e334d7b17-sensubility-config\") pod \"stf-smoketest-smoke1-m475d\" (UID: \"297af4e8-245e-4df4-b837-e50e334d7b17\") " pod="service-telemetry/stf-smoketest-smoke1-m475d" Jan 21 09:43:53 crc kubenswrapper[5113]: I0121 09:43:53.301276 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/297af4e8-245e-4df4-b837-e50e334d7b17-healthcheck-log\") pod \"stf-smoketest-smoke1-m475d\" (UID: \"297af4e8-245e-4df4-b837-e50e334d7b17\") " pod="service-telemetry/stf-smoketest-smoke1-m475d" Jan 21 09:43:53 crc kubenswrapper[5113]: I0121 09:43:53.301597 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/297af4e8-245e-4df4-b837-e50e334d7b17-ceilometer-publisher\") pod \"stf-smoketest-smoke1-m475d\" (UID: \"297af4e8-245e-4df4-b837-e50e334d7b17\") " pod="service-telemetry/stf-smoketest-smoke1-m475d" Jan 21 09:43:53 crc kubenswrapper[5113]: I0121 09:43:53.301673 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/297af4e8-245e-4df4-b837-e50e334d7b17-collectd-config\") pod \"stf-smoketest-smoke1-m475d\" (UID: \"297af4e8-245e-4df4-b837-e50e334d7b17\") " pod="service-telemetry/stf-smoketest-smoke1-m475d" Jan 21 09:43:53 crc kubenswrapper[5113]: I0121 09:43:53.319352 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b85s5\" (UniqueName: \"kubernetes.io/projected/297af4e8-245e-4df4-b837-e50e334d7b17-kube-api-access-b85s5\") pod \"stf-smoketest-smoke1-m475d\" (UID: \"297af4e8-245e-4df4-b837-e50e334d7b17\") " 
pod="service-telemetry/stf-smoketest-smoke1-m475d" Jan 21 09:43:53 crc kubenswrapper[5113]: I0121 09:43:53.498817 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-m475d" Jan 21 09:43:53 crc kubenswrapper[5113]: I0121 09:43:53.564813 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Jan 21 09:43:53 crc kubenswrapper[5113]: I0121 09:43:53.573225 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Jan 21 09:43:53 crc kubenswrapper[5113]: I0121 09:43:53.573356 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Jan 21 09:43:53 crc kubenswrapper[5113]: I0121 09:43:53.605645 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvmwd\" (UniqueName: \"kubernetes.io/projected/f55cb209-1c34-4e87-bd93-efaa2cb8c9fc-kube-api-access-rvmwd\") pod \"curl\" (UID: \"f55cb209-1c34-4e87-bd93-efaa2cb8c9fc\") " pod="service-telemetry/curl" Jan 21 09:43:53 crc kubenswrapper[5113]: I0121 09:43:53.707118 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rvmwd\" (UniqueName: \"kubernetes.io/projected/f55cb209-1c34-4e87-bd93-efaa2cb8c9fc-kube-api-access-rvmwd\") pod \"curl\" (UID: \"f55cb209-1c34-4e87-bd93-efaa2cb8c9fc\") " pod="service-telemetry/curl" Jan 21 09:43:53 crc kubenswrapper[5113]: I0121 09:43:53.724608 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvmwd\" (UniqueName: \"kubernetes.io/projected/f55cb209-1c34-4e87-bd93-efaa2cb8c9fc-kube-api-access-rvmwd\") pod \"curl\" (UID: \"f55cb209-1c34-4e87-bd93-efaa2cb8c9fc\") " pod="service-telemetry/curl" Jan 21 09:43:53 crc kubenswrapper[5113]: I0121 09:43:53.906682 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Jan 21 09:43:53 crc kubenswrapper[5113]: W0121 09:43:53.941933 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod297af4e8_245e_4df4_b837_e50e334d7b17.slice/crio-4e08a39f7e34d1968f5b4d4ae842a56657e79f9d2d7984083836da2bc7bafd3d WatchSource:0}: Error finding container 4e08a39f7e34d1968f5b4d4ae842a56657e79f9d2d7984083836da2bc7bafd3d: Status 404 returned error can't find the container with id 4e08a39f7e34d1968f5b4d4ae842a56657e79f9d2d7984083836da2bc7bafd3d Jan 21 09:43:53 crc kubenswrapper[5113]: I0121 09:43:53.945527 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-m475d"] Jan 21 09:43:54 crc kubenswrapper[5113]: I0121 09:43:54.146985 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Jan 21 09:43:54 crc kubenswrapper[5113]: W0121 09:43:54.151231 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf55cb209_1c34_4e87_bd93_efaa2cb8c9fc.slice/crio-04c903899b8dafff28adccb7c5ece5c6ad2d987f91f074e55467de47168cf3f9 WatchSource:0}: Error finding container 04c903899b8dafff28adccb7c5ece5c6ad2d987f91f074e55467de47168cf3f9: Status 404 returned error can't find the container with id 04c903899b8dafff28adccb7c5ece5c6ad2d987f91f074e55467de47168cf3f9 Jan 21 09:43:54 crc kubenswrapper[5113]: I0121 09:43:54.843305 5113 scope.go:117] "RemoveContainer" containerID="f79a82752e02a5e238c5d6f01302276eac76d874f075f0c47689605c2d8a3861" Jan 21 09:43:54 crc kubenswrapper[5113]: I0121 09:43:54.865374 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"f55cb209-1c34-4e87-bd93-efaa2cb8c9fc","Type":"ContainerStarted","Data":"04c903899b8dafff28adccb7c5ece5c6ad2d987f91f074e55467de47168cf3f9"} Jan 21 09:43:54 crc kubenswrapper[5113]: I0121 09:43:54.865451 5113 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-m475d" event={"ID":"297af4e8-245e-4df4-b837-e50e334d7b17","Type":"ContainerStarted","Data":"4e08a39f7e34d1968f5b4d4ae842a56657e79f9d2d7984083836da2bc7bafd3d"} Jan 21 09:43:56 crc kubenswrapper[5113]: I0121 09:43:56.843947 5113 scope.go:117] "RemoveContainer" containerID="db4f3a164ba5ef247b06752c9abe0fc5c6df6d898c2b960b867af49f16a6d183" Jan 21 09:43:56 crc kubenswrapper[5113]: I0121 09:43:56.876784 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z" event={"ID":"ecf101f2-0ba8-4f13-8b28-25f3102f6907","Type":"ContainerStarted","Data":"04cbde1d5995e6a63b3c23e76dc4263da6a4e4669614c8b101f58e1237f5d04d"} Jan 21 09:43:56 crc kubenswrapper[5113]: I0121 09:43:56.881051 5113 generic.go:358] "Generic (PLEG): container finished" podID="f55cb209-1c34-4e87-bd93-efaa2cb8c9fc" containerID="db02bd4a973b4be2bc819dc4f2897831266cdd0fe25c481f22f2a421b974f435" exitCode=0 Jan 21 09:43:56 crc kubenswrapper[5113]: I0121 09:43:56.881162 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"f55cb209-1c34-4e87-bd93-efaa2cb8c9fc","Type":"ContainerDied","Data":"db02bd4a973b4be2bc819dc4f2897831266cdd0fe25c481f22f2a421b974f435"} Jan 21 09:43:57 crc kubenswrapper[5113]: I0121 09:43:57.843989 5113 scope.go:117] "RemoveContainer" containerID="e0d648e87fee0e395cc1ebdb8e883c3e2c9f75b6da7c8c63c9a451385481aac2" Jan 21 09:43:57 crc kubenswrapper[5113]: I0121 09:43:57.906427 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-585497b7f5-km7ct" event={"ID":"4cc32795-d962-403a-a4b0-b05770e77786","Type":"ContainerStarted","Data":"a8560d19be5833c86c87ccf8337f942bc2386c12a5728103cfcdbfd797b50db1"} Jan 21 09:43:58 crc kubenswrapper[5113]: I0121 09:43:58.843905 5113 scope.go:117] "RemoveContainer" 
containerID="192490a20cbf60cf817c8f2b62866642b26bd2709c1d2548e6215f7d48544c90" Jan 21 09:43:58 crc kubenswrapper[5113]: I0121 09:43:58.844137 5113 scope.go:117] "RemoveContainer" containerID="4666cc87b1953b92bff01f6a7be1271ca350c67788513c59d59c6b9a02877d97" Jan 21 09:43:59 crc kubenswrapper[5113]: I0121 09:43:59.843611 5113 scope.go:117] "RemoveContainer" containerID="323c269f606c89f54b9cd8eea875f977746d25c6a01c2f3a0d008ec5a4e7b9bf" Jan 21 09:43:59 crc kubenswrapper[5113]: E0121 09:43:59.844090 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 09:44:00 crc kubenswrapper[5113]: I0121 09:44:00.122461 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483144-l5frf"] Jan 21 09:44:00 crc kubenswrapper[5113]: I0121 09:44:00.557989 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483144-l5frf"] Jan 21 09:44:00 crc kubenswrapper[5113]: I0121 09:44:00.558236 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483144-l5frf"
Jan 21 09:44:00 crc kubenswrapper[5113]: I0121 09:44:00.561086 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-mmrr5\""
Jan 21 09:44:00 crc kubenswrapper[5113]: I0121 09:44:00.562500 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 21 09:44:00 crc kubenswrapper[5113]: I0121 09:44:00.564629 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 21 09:44:00 crc kubenswrapper[5113]: I0121 09:44:00.608552 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wh7j\" (UniqueName: \"kubernetes.io/projected/ef99b817-efc5-44b9-91c3-f4eaddc83ee5-kube-api-access-8wh7j\") pod \"auto-csr-approver-29483144-l5frf\" (UID: \"ef99b817-efc5-44b9-91c3-f4eaddc83ee5\") " pod="openshift-infra/auto-csr-approver-29483144-l5frf"
Jan 21 09:44:00 crc kubenswrapper[5113]: I0121 09:44:00.710010 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8wh7j\" (UniqueName: \"kubernetes.io/projected/ef99b817-efc5-44b9-91c3-f4eaddc83ee5-kube-api-access-8wh7j\") pod \"auto-csr-approver-29483144-l5frf\" (UID: \"ef99b817-efc5-44b9-91c3-f4eaddc83ee5\") " pod="openshift-infra/auto-csr-approver-29483144-l5frf"
Jan 21 09:44:00 crc kubenswrapper[5113]: I0121 09:44:00.740789 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wh7j\" (UniqueName: \"kubernetes.io/projected/ef99b817-efc5-44b9-91c3-f4eaddc83ee5-kube-api-access-8wh7j\") pod \"auto-csr-approver-29483144-l5frf\" (UID: \"ef99b817-efc5-44b9-91c3-f4eaddc83ee5\") " pod="openshift-infra/auto-csr-approver-29483144-l5frf"
Jan 21 09:44:00 crc kubenswrapper[5113]: I0121 09:44:00.899494 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483144-l5frf"
Jan 21 09:44:03 crc kubenswrapper[5113]: I0121 09:44:03.304324 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl"
Jan 21 09:44:03 crc kubenswrapper[5113]: I0121 09:44:03.348096 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvmwd\" (UniqueName: \"kubernetes.io/projected/f55cb209-1c34-4e87-bd93-efaa2cb8c9fc-kube-api-access-rvmwd\") pod \"f55cb209-1c34-4e87-bd93-efaa2cb8c9fc\" (UID: \"f55cb209-1c34-4e87-bd93-efaa2cb8c9fc\") "
Jan 21 09:44:03 crc kubenswrapper[5113]: I0121 09:44:03.355536 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f55cb209-1c34-4e87-bd93-efaa2cb8c9fc-kube-api-access-rvmwd" (OuterVolumeSpecName: "kube-api-access-rvmwd") pod "f55cb209-1c34-4e87-bd93-efaa2cb8c9fc" (UID: "f55cb209-1c34-4e87-bd93-efaa2cb8c9fc"). InnerVolumeSpecName "kube-api-access-rvmwd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:44:03 crc kubenswrapper[5113]: I0121 09:44:03.450197 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rvmwd\" (UniqueName: \"kubernetes.io/projected/f55cb209-1c34-4e87-bd93-efaa2cb8c9fc-kube-api-access-rvmwd\") on node \"crc\" DevicePath \"\""
Jan 21 09:44:03 crc kubenswrapper[5113]: I0121 09:44:03.463673 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_f55cb209-1c34-4e87-bd93-efaa2cb8c9fc/curl/0.log"
Jan 21 09:44:03 crc kubenswrapper[5113]: I0121 09:44:03.735672 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-694dc457d5-nvtcp_904fae67-943b-4c4e-b2a9-969896ca1635/prometheus-webhook-snmp/0.log"
Jan 21 09:44:03 crc kubenswrapper[5113]: I0121 09:44:03.958012 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl"
Jan 21 09:44:03 crc kubenswrapper[5113]: I0121 09:44:03.958134 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"f55cb209-1c34-4e87-bd93-efaa2cb8c9fc","Type":"ContainerDied","Data":"04c903899b8dafff28adccb7c5ece5c6ad2d987f91f074e55467de47168cf3f9"}
Jan 21 09:44:03 crc kubenswrapper[5113]: I0121 09:44:03.958175 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04c903899b8dafff28adccb7c5ece5c6ad2d987f91f074e55467de47168cf3f9"
Jan 21 09:44:04 crc kubenswrapper[5113]: I0121 09:44:04.729283 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483144-l5frf"]
Jan 21 09:44:04 crc kubenswrapper[5113]: W0121 09:44:04.740010 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef99b817_efc5_44b9_91c3_f4eaddc83ee5.slice/crio-8949aba279426305978c97f6848cf17169db527d53a54d5ade8afbce911f9a42 WatchSource:0}: Error finding container 8949aba279426305978c97f6848cf17169db527d53a54d5ade8afbce911f9a42: Status 404 returned error can't find the container with id 8949aba279426305978c97f6848cf17169db527d53a54d5ade8afbce911f9a42
Jan 21 09:44:04 crc kubenswrapper[5113]: I0121 09:44:04.966052 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-m475d" event={"ID":"297af4e8-245e-4df4-b837-e50e334d7b17","Type":"ContainerStarted","Data":"b21f6df8a8f3cfc4b134c61d38e01e8f111687bce2121b893d2e12b3a6d62dc6"}
Jan 21 09:44:04 crc kubenswrapper[5113]: I0121 09:44:04.968918 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst" event={"ID":"44130ffd-90a2-4b98-b98a-28c10f45a9ca","Type":"ContainerStarted","Data":"f017215419cf8a907fcbb332b142021b4a27afec921194c318b382363bec74af"}
Jan 21 09:44:04 crc kubenswrapper[5113]: I0121 09:44:04.971328 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq" event={"ID":"aa026e59-d4cb-4580-b908-a08b359465a2","Type":"ContainerStarted","Data":"708f246cbbca894130612e76e3deb3e4adf21c93061d8dac592edfe60034dd45"}
Jan 21 09:44:04 crc kubenswrapper[5113]: I0121 09:44:04.975213 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-b67f79854-pcpch" event={"ID":"8d650442-026a-4ce5-9d25-8354edb3df27","Type":"ContainerStarted","Data":"e051bbabf74109d303de37d0785196465a20ba6c618e256d1440a560a339a723"}
Jan 21 09:44:04 crc kubenswrapper[5113]: I0121 09:44:04.976635 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483144-l5frf" event={"ID":"ef99b817-efc5-44b9-91c3-f4eaddc83ee5","Type":"ContainerStarted","Data":"8949aba279426305978c97f6848cf17169db527d53a54d5ade8afbce911f9a42"}
Jan 21 09:44:06 crc kubenswrapper[5113]: I0121 09:44:06.995819 5113 generic.go:358] "Generic (PLEG): container finished" podID="ef99b817-efc5-44b9-91c3-f4eaddc83ee5" containerID="e5f8f1135b8459a6a467ffdd569f2a2f3fc6b039c5d43e53d4150ccfad5cb9ea" exitCode=0
Jan 21 09:44:06 crc kubenswrapper[5113]: I0121 09:44:06.995966 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483144-l5frf" event={"ID":"ef99b817-efc5-44b9-91c3-f4eaddc83ee5","Type":"ContainerDied","Data":"e5f8f1135b8459a6a467ffdd569f2a2f3fc6b039c5d43e53d4150ccfad5cb9ea"}
Jan 21 09:44:11 crc kubenswrapper[5113]: I0121 09:44:11.032875 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483144-l5frf" event={"ID":"ef99b817-efc5-44b9-91c3-f4eaddc83ee5","Type":"ContainerDied","Data":"8949aba279426305978c97f6848cf17169db527d53a54d5ade8afbce911f9a42"}
Jan 21 09:44:11 crc kubenswrapper[5113]: I0121 09:44:11.033499 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8949aba279426305978c97f6848cf17169db527d53a54d5ade8afbce911f9a42"
Jan 21 09:44:11 crc kubenswrapper[5113]: I0121 09:44:11.061239 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483144-l5frf"
Jan 21 09:44:11 crc kubenswrapper[5113]: I0121 09:44:11.168499 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wh7j\" (UniqueName: \"kubernetes.io/projected/ef99b817-efc5-44b9-91c3-f4eaddc83ee5-kube-api-access-8wh7j\") pod \"ef99b817-efc5-44b9-91c3-f4eaddc83ee5\" (UID: \"ef99b817-efc5-44b9-91c3-f4eaddc83ee5\") "
Jan 21 09:44:11 crc kubenswrapper[5113]: I0121 09:44:11.172975 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef99b817-efc5-44b9-91c3-f4eaddc83ee5-kube-api-access-8wh7j" (OuterVolumeSpecName: "kube-api-access-8wh7j") pod "ef99b817-efc5-44b9-91c3-f4eaddc83ee5" (UID: "ef99b817-efc5-44b9-91c3-f4eaddc83ee5"). InnerVolumeSpecName "kube-api-access-8wh7j". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:44:11 crc kubenswrapper[5113]: I0121 09:44:11.270288 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8wh7j\" (UniqueName: \"kubernetes.io/projected/ef99b817-efc5-44b9-91c3-f4eaddc83ee5-kube-api-access-8wh7j\") on node \"crc\" DevicePath \"\""
Jan 21 09:44:12 crc kubenswrapper[5113]: I0121 09:44:12.044322 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-m475d" event={"ID":"297af4e8-245e-4df4-b837-e50e334d7b17","Type":"ContainerStarted","Data":"a90eec319c7dfe7b650e8bb98b9cd97723a805be92b0251efbcc747ea248f2cb"}
Jan 21 09:44:12 crc kubenswrapper[5113]: I0121 09:44:12.044582 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483144-l5frf"
Jan 21 09:44:12 crc kubenswrapper[5113]: I0121 09:44:12.066685 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-m475d" podStartSLOduration=1.933309379 podStartE2EDuration="19.066662278s" podCreationTimestamp="2026-01-21 09:43:53 +0000 UTC" firstStartedPulling="2026-01-21 09:43:53.944862541 +0000 UTC m=+1563.445689590" lastFinishedPulling="2026-01-21 09:44:11.07821544 +0000 UTC m=+1580.579042489" observedRunningTime="2026-01-21 09:44:12.066126363 +0000 UTC m=+1581.566953422" watchObservedRunningTime="2026-01-21 09:44:12.066662278 +0000 UTC m=+1581.567489337"
Jan 21 09:44:12 crc kubenswrapper[5113]: I0121 09:44:12.128398 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483138-lm7vc"]
Jan 21 09:44:12 crc kubenswrapper[5113]: I0121 09:44:12.133842 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483138-lm7vc"]
Jan 21 09:44:12 crc kubenswrapper[5113]: I0121 09:44:12.857076 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05b70962-5d67-4607-bcb9-9fa274469d26" path="/var/lib/kubelet/pods/05b70962-5d67-4607-bcb9-9fa274469d26/volumes"
Jan 21 09:44:14 crc kubenswrapper[5113]: I0121 09:44:14.843783 5113 scope.go:117] "RemoveContainer" containerID="323c269f606c89f54b9cd8eea875f977746d25c6a01c2f3a0d008ec5a4e7b9bf"
Jan 21 09:44:14 crc kubenswrapper[5113]: E0121 09:44:14.844550 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd"
Jan 21 09:44:25 crc kubenswrapper[5113]: I0121 09:44:25.844146 5113 scope.go:117] "RemoveContainer" containerID="323c269f606c89f54b9cd8eea875f977746d25c6a01c2f3a0d008ec5a4e7b9bf"
Jan 21 09:44:25 crc kubenswrapper[5113]: E0121 09:44:25.845108 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd"
Jan 21 09:44:33 crc kubenswrapper[5113]: I0121 09:44:33.907583 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-694dc457d5-nvtcp_904fae67-943b-4c4e-b2a9-969896ca1635/prometheus-webhook-snmp/0.log"
Jan 21 09:44:39 crc kubenswrapper[5113]: I0121 09:44:39.305636 5113 generic.go:358] "Generic (PLEG): container finished" podID="297af4e8-245e-4df4-b837-e50e334d7b17" containerID="b21f6df8a8f3cfc4b134c61d38e01e8f111687bce2121b893d2e12b3a6d62dc6" exitCode=0
Jan 21 09:44:39 crc kubenswrapper[5113]: I0121 09:44:39.305726 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-m475d" event={"ID":"297af4e8-245e-4df4-b837-e50e334d7b17","Type":"ContainerDied","Data":"b21f6df8a8f3cfc4b134c61d38e01e8f111687bce2121b893d2e12b3a6d62dc6"}
Jan 21 09:44:39 crc kubenswrapper[5113]: I0121 09:44:39.307166 5113 scope.go:117] "RemoveContainer" containerID="b21f6df8a8f3cfc4b134c61d38e01e8f111687bce2121b893d2e12b3a6d62dc6"
Jan 21 09:44:39 crc kubenswrapper[5113]: I0121 09:44:39.843514 5113 scope.go:117] "RemoveContainer" containerID="323c269f606c89f54b9cd8eea875f977746d25c6a01c2f3a0d008ec5a4e7b9bf"
Jan 21 09:44:39 crc kubenswrapper[5113]: E0121 09:44:39.843994 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd"
Jan 21 09:44:43 crc kubenswrapper[5113]: I0121 09:44:43.343941 5113 generic.go:358] "Generic (PLEG): container finished" podID="297af4e8-245e-4df4-b837-e50e334d7b17" containerID="a90eec319c7dfe7b650e8bb98b9cd97723a805be92b0251efbcc747ea248f2cb" exitCode=0
Jan 21 09:44:43 crc kubenswrapper[5113]: I0121 09:44:43.344074 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-m475d" event={"ID":"297af4e8-245e-4df4-b837-e50e334d7b17","Type":"ContainerDied","Data":"a90eec319c7dfe7b650e8bb98b9cd97723a805be92b0251efbcc747ea248f2cb"}
Jan 21 09:44:44 crc kubenswrapper[5113]: I0121 09:44:44.651325 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-m475d"
Jan 21 09:44:44 crc kubenswrapper[5113]: I0121 09:44:44.667017 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/297af4e8-245e-4df4-b837-e50e334d7b17-sensubility-config\") pod \"297af4e8-245e-4df4-b837-e50e334d7b17\" (UID: \"297af4e8-245e-4df4-b837-e50e334d7b17\") "
Jan 21 09:44:44 crc kubenswrapper[5113]: I0121 09:44:44.667114 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/297af4e8-245e-4df4-b837-e50e334d7b17-collectd-config\") pod \"297af4e8-245e-4df4-b837-e50e334d7b17\" (UID: \"297af4e8-245e-4df4-b837-e50e334d7b17\") "
Jan 21 09:44:44 crc kubenswrapper[5113]: I0121 09:44:44.667151 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/297af4e8-245e-4df4-b837-e50e334d7b17-ceilometer-publisher\") pod \"297af4e8-245e-4df4-b837-e50e334d7b17\" (UID: \"297af4e8-245e-4df4-b837-e50e334d7b17\") "
Jan 21 09:44:44 crc kubenswrapper[5113]: I0121 09:44:44.667247 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/297af4e8-245e-4df4-b837-e50e334d7b17-collectd-entrypoint-script\") pod \"297af4e8-245e-4df4-b837-e50e334d7b17\" (UID: \"297af4e8-245e-4df4-b837-e50e334d7b17\") "
Jan 21 09:44:44 crc kubenswrapper[5113]: I0121 09:44:44.667275 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/297af4e8-245e-4df4-b837-e50e334d7b17-ceilometer-entrypoint-script\") pod \"297af4e8-245e-4df4-b837-e50e334d7b17\" (UID: \"297af4e8-245e-4df4-b837-e50e334d7b17\") "
Jan 21 09:44:44 crc kubenswrapper[5113]: I0121 09:44:44.667292 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b85s5\" (UniqueName: \"kubernetes.io/projected/297af4e8-245e-4df4-b837-e50e334d7b17-kube-api-access-b85s5\") pod \"297af4e8-245e-4df4-b837-e50e334d7b17\" (UID: \"297af4e8-245e-4df4-b837-e50e334d7b17\") "
Jan 21 09:44:44 crc kubenswrapper[5113]: I0121 09:44:44.667325 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/297af4e8-245e-4df4-b837-e50e334d7b17-healthcheck-log\") pod \"297af4e8-245e-4df4-b837-e50e334d7b17\" (UID: \"297af4e8-245e-4df4-b837-e50e334d7b17\") "
Jan 21 09:44:44 crc kubenswrapper[5113]: I0121 09:44:44.684040 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/297af4e8-245e-4df4-b837-e50e334d7b17-kube-api-access-b85s5" (OuterVolumeSpecName: "kube-api-access-b85s5") pod "297af4e8-245e-4df4-b837-e50e334d7b17" (UID: "297af4e8-245e-4df4-b837-e50e334d7b17"). InnerVolumeSpecName "kube-api-access-b85s5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:44:44 crc kubenswrapper[5113]: I0121 09:44:44.701586 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/297af4e8-245e-4df4-b837-e50e334d7b17-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "297af4e8-245e-4df4-b837-e50e334d7b17" (UID: "297af4e8-245e-4df4-b837-e50e334d7b17"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:44:44 crc kubenswrapper[5113]: I0121 09:44:44.704306 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/297af4e8-245e-4df4-b837-e50e334d7b17-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "297af4e8-245e-4df4-b837-e50e334d7b17" (UID: "297af4e8-245e-4df4-b837-e50e334d7b17"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:44:44 crc kubenswrapper[5113]: I0121 09:44:44.705495 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/297af4e8-245e-4df4-b837-e50e334d7b17-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "297af4e8-245e-4df4-b837-e50e334d7b17" (UID: "297af4e8-245e-4df4-b837-e50e334d7b17"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:44:44 crc kubenswrapper[5113]: I0121 09:44:44.708381 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/297af4e8-245e-4df4-b837-e50e334d7b17-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "297af4e8-245e-4df4-b837-e50e334d7b17" (UID: "297af4e8-245e-4df4-b837-e50e334d7b17"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:44:44 crc kubenswrapper[5113]: I0121 09:44:44.718132 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/297af4e8-245e-4df4-b837-e50e334d7b17-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "297af4e8-245e-4df4-b837-e50e334d7b17" (UID: "297af4e8-245e-4df4-b837-e50e334d7b17"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:44:44 crc kubenswrapper[5113]: I0121 09:44:44.720377 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/297af4e8-245e-4df4-b837-e50e334d7b17-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "297af4e8-245e-4df4-b837-e50e334d7b17" (UID: "297af4e8-245e-4df4-b837-e50e334d7b17"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 09:44:44 crc kubenswrapper[5113]: I0121 09:44:44.769646 5113 reconciler_common.go:299] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/297af4e8-245e-4df4-b837-e50e334d7b17-sensubility-config\") on node \"crc\" DevicePath \"\""
Jan 21 09:44:44 crc kubenswrapper[5113]: I0121 09:44:44.769698 5113 reconciler_common.go:299] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/297af4e8-245e-4df4-b837-e50e334d7b17-collectd-config\") on node \"crc\" DevicePath \"\""
Jan 21 09:44:44 crc kubenswrapper[5113]: I0121 09:44:44.769720 5113 reconciler_common.go:299] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/297af4e8-245e-4df4-b837-e50e334d7b17-ceilometer-publisher\") on node \"crc\" DevicePath \"\""
Jan 21 09:44:44 crc kubenswrapper[5113]: I0121 09:44:44.769745 5113 reconciler_common.go:299] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/297af4e8-245e-4df4-b837-e50e334d7b17-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\""
Jan 21 09:44:44 crc kubenswrapper[5113]: I0121 09:44:44.769789 5113 reconciler_common.go:299] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/297af4e8-245e-4df4-b837-e50e334d7b17-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\""
Jan 21 09:44:44 crc kubenswrapper[5113]: I0121 09:44:44.769806 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b85s5\" (UniqueName: \"kubernetes.io/projected/297af4e8-245e-4df4-b837-e50e334d7b17-kube-api-access-b85s5\") on node \"crc\" DevicePath \"\""
Jan 21 09:44:44 crc kubenswrapper[5113]: I0121 09:44:44.769823 5113 reconciler_common.go:299] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/297af4e8-245e-4df4-b837-e50e334d7b17-healthcheck-log\") on node \"crc\" DevicePath \"\""
Jan 21 09:44:45 crc kubenswrapper[5113]: I0121 09:44:45.378485 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-m475d" event={"ID":"297af4e8-245e-4df4-b837-e50e334d7b17","Type":"ContainerDied","Data":"4e08a39f7e34d1968f5b4d4ae842a56657e79f9d2d7984083836da2bc7bafd3d"}
Jan 21 09:44:45 crc kubenswrapper[5113]: I0121 09:44:45.378562 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e08a39f7e34d1968f5b4d4ae842a56657e79f9d2d7984083836da2bc7bafd3d"
Jan 21 09:44:45 crc kubenswrapper[5113]: I0121 09:44:45.378904 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-m475d"
Jan 21 09:44:46 crc kubenswrapper[5113]: I0121 09:44:46.862498 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-m475d_297af4e8-245e-4df4-b837-e50e334d7b17/smoketest-collectd/0.log"
Jan 21 09:44:47 crc kubenswrapper[5113]: I0121 09:44:47.171068 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-m475d_297af4e8-245e-4df4-b837-e50e334d7b17/smoketest-ceilometer/0.log"
Jan 21 09:44:47 crc kubenswrapper[5113]: I0121 09:44:47.488586 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-55bf8d5cb-fqvvc_a845c762-53f6-44eb-9c7a-31755d333fe4/default-interconnect/0.log"
Jan 21 09:44:47 crc kubenswrapper[5113]: I0121 09:44:47.843020 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst_44130ffd-90a2-4b98-b98a-28c10f45a9ca/bridge/2.log"
Jan 21 09:44:48 crc kubenswrapper[5113]: I0121 09:44:48.210037 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst_44130ffd-90a2-4b98-b98a-28c10f45a9ca/sg-core/0.log"
Jan 21 09:44:48 crc kubenswrapper[5113]: I0121 09:44:48.529187 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-585497b7f5-km7ct_4cc32795-d962-403a-a4b0-b05770e77786/bridge/2.log"
Jan 21 09:44:48 crc kubenswrapper[5113]: I0121 09:44:48.806784 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-585497b7f5-km7ct_4cc32795-d962-403a-a4b0-b05770e77786/sg-core/0.log"
Jan 21 09:44:49 crc kubenswrapper[5113]: I0121 09:44:49.134064 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq_aa026e59-d4cb-4580-b908-a08b359465a2/bridge/2.log"
Jan 21 09:44:49 crc kubenswrapper[5113]: I0121 09:44:49.463226 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq_aa026e59-d4cb-4580-b908-a08b359465a2/sg-core/0.log"
Jan 21 09:44:49 crc kubenswrapper[5113]: I0121 09:44:49.803273 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-b67f79854-pcpch_8d650442-026a-4ce5-9d25-8354edb3df27/bridge/2.log"
Jan 21 09:44:50 crc kubenswrapper[5113]: I0121 09:44:50.128980 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-b67f79854-pcpch_8d650442-026a-4ce5-9d25-8354edb3df27/sg-core/0.log"
Jan 21 09:44:50 crc kubenswrapper[5113]: I0121 09:44:50.452432 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z_ecf101f2-0ba8-4f13-8b28-25f3102f6907/bridge/2.log"
Jan 21 09:44:50 crc kubenswrapper[5113]: I0121 09:44:50.749336 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z_ecf101f2-0ba8-4f13-8b28-25f3102f6907/sg-core/0.log"
Jan 21 09:44:54 crc kubenswrapper[5113]: I0121 09:44:54.301774 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-5688757f5c-tvmkz_a0cf7e4c-4911-4f2f-8309-b3a890282b6e/operator/0.log"
Jan 21 09:44:54 crc kubenswrapper[5113]: I0121 09:44:54.621028 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_7daab145-3025-4d93-bb61-8921bd849a13/prometheus/0.log"
Jan 21 09:44:54 crc kubenswrapper[5113]: I0121 09:44:54.848254 5113 scope.go:117] "RemoveContainer" containerID="323c269f606c89f54b9cd8eea875f977746d25c6a01c2f3a0d008ec5a4e7b9bf"
Jan 21 09:44:54 crc kubenswrapper[5113]: E0121 09:44:54.849310 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd"
Jan 21 09:44:54 crc kubenswrapper[5113]: I0121 09:44:54.909675 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_961a978d-fbd8-415d-a41f-b80b9693e721/elasticsearch/0.log"
Jan 21 09:44:55 crc kubenswrapper[5113]: I0121 09:44:55.231214 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-694dc457d5-nvtcp_904fae67-943b-4c4e-b2a9-969896ca1635/prometheus-webhook-snmp/0.log"
Jan 21 09:44:55 crc kubenswrapper[5113]: I0121 09:44:55.593208 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_c54c1065-fd71-4792-95c5-555b4af863c4/alertmanager/0.log"
Jan 21 09:45:00 crc kubenswrapper[5113]: I0121 09:45:00.152206 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483145-mhvns"]
Jan 21 09:45:00 crc kubenswrapper[5113]: I0121 09:45:00.154012 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="297af4e8-245e-4df4-b837-e50e334d7b17" containerName="smoketest-ceilometer"
Jan 21 09:45:00 crc kubenswrapper[5113]: I0121 09:45:00.154040 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="297af4e8-245e-4df4-b837-e50e334d7b17" containerName="smoketest-ceilometer"
Jan 21 09:45:00 crc kubenswrapper[5113]: I0121 09:45:00.154063 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ef99b817-efc5-44b9-91c3-f4eaddc83ee5" containerName="oc"
Jan 21 09:45:00 crc kubenswrapper[5113]: I0121 09:45:00.154075 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef99b817-efc5-44b9-91c3-f4eaddc83ee5" containerName="oc"
Jan 21 09:45:00 crc kubenswrapper[5113]: I0121 09:45:00.154109 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="297af4e8-245e-4df4-b837-e50e334d7b17" containerName="smoketest-collectd"
Jan 21 09:45:00 crc kubenswrapper[5113]: I0121 09:45:00.154121 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="297af4e8-245e-4df4-b837-e50e334d7b17" containerName="smoketest-collectd"
Jan 21 09:45:00 crc kubenswrapper[5113]: I0121 09:45:00.154170 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f55cb209-1c34-4e87-bd93-efaa2cb8c9fc" containerName="curl"
Jan 21 09:45:00 crc kubenswrapper[5113]: I0121 09:45:00.154181 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="f55cb209-1c34-4e87-bd93-efaa2cb8c9fc" containerName="curl"
Jan 21 09:45:00 crc kubenswrapper[5113]: I0121 09:45:00.154431 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="ef99b817-efc5-44b9-91c3-f4eaddc83ee5" containerName="oc"
Jan 21 09:45:00 crc kubenswrapper[5113]: I0121 09:45:00.154462 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="297af4e8-245e-4df4-b837-e50e334d7b17" containerName="smoketest-collectd"
Jan 21 09:45:00 crc kubenswrapper[5113]: I0121 09:45:00.154477 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="f55cb209-1c34-4e87-bd93-efaa2cb8c9fc" containerName="curl"
Jan 21 09:45:00 crc kubenswrapper[5113]: I0121 09:45:00.154496 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="297af4e8-245e-4df4-b837-e50e334d7b17" containerName="smoketest-ceilometer"
Jan 21 09:45:00 crc kubenswrapper[5113]: I0121 09:45:00.172588 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483145-mhvns"]
Jan 21 09:45:00 crc kubenswrapper[5113]: I0121 09:45:00.172797 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483145-mhvns"
Jan 21 09:45:00 crc kubenswrapper[5113]: I0121 09:45:00.176085 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\""
Jan 21 09:45:00 crc kubenswrapper[5113]: I0121 09:45:00.176211 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\""
Jan 21 09:45:00 crc kubenswrapper[5113]: I0121 09:45:00.226187 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3729a63-f276-462b-a69f-3ed67c756dce-config-volume\") pod \"collect-profiles-29483145-mhvns\" (UID: \"b3729a63-f276-462b-a69f-3ed67c756dce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483145-mhvns"
Jan 21 09:45:00 crc kubenswrapper[5113]: I0121 09:45:00.226859 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6kv7\" (UniqueName: \"kubernetes.io/projected/b3729a63-f276-462b-a69f-3ed67c756dce-kube-api-access-j6kv7\") pod \"collect-profiles-29483145-mhvns\" (UID: \"b3729a63-f276-462b-a69f-3ed67c756dce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483145-mhvns"
Jan 21 09:45:00 crc kubenswrapper[5113]: I0121 09:45:00.227070 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b3729a63-f276-462b-a69f-3ed67c756dce-secret-volume\") pod \"collect-profiles-29483145-mhvns\" (UID: \"b3729a63-f276-462b-a69f-3ed67c756dce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483145-mhvns"
Jan 21 09:45:00 crc kubenswrapper[5113]: I0121 09:45:00.329140 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3729a63-f276-462b-a69f-3ed67c756dce-config-volume\") pod \"collect-profiles-29483145-mhvns\" (UID: \"b3729a63-f276-462b-a69f-3ed67c756dce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483145-mhvns"
Jan 21 09:45:00 crc kubenswrapper[5113]: I0121 09:45:00.329314 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j6kv7\" (UniqueName: \"kubernetes.io/projected/b3729a63-f276-462b-a69f-3ed67c756dce-kube-api-access-j6kv7\") pod \"collect-profiles-29483145-mhvns\" (UID: \"b3729a63-f276-462b-a69f-3ed67c756dce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483145-mhvns"
Jan 21 09:45:00 crc kubenswrapper[5113]: I0121 09:45:00.329444 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b3729a63-f276-462b-a69f-3ed67c756dce-secret-volume\") pod \"collect-profiles-29483145-mhvns\" (UID: \"b3729a63-f276-462b-a69f-3ed67c756dce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483145-mhvns"
Jan 21 09:45:00 crc kubenswrapper[5113]: I0121 09:45:00.330912 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3729a63-f276-462b-a69f-3ed67c756dce-config-volume\") pod \"collect-profiles-29483145-mhvns\" (UID: \"b3729a63-f276-462b-a69f-3ed67c756dce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483145-mhvns"
Jan 21 09:45:00 crc kubenswrapper[5113]: I0121 09:45:00.339727 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b3729a63-f276-462b-a69f-3ed67c756dce-secret-volume\") pod \"collect-profiles-29483145-mhvns\" (UID: \"b3729a63-f276-462b-a69f-3ed67c756dce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483145-mhvns"
Jan 21 09:45:00 crc kubenswrapper[5113]: I0121 09:45:00.355445 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6kv7\" (UniqueName: \"kubernetes.io/projected/b3729a63-f276-462b-a69f-3ed67c756dce-kube-api-access-j6kv7\") pod \"collect-profiles-29483145-mhvns\" (UID: \"b3729a63-f276-462b-a69f-3ed67c756dce\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483145-mhvns"
Jan 21 09:45:00 crc kubenswrapper[5113]: I0121 09:45:00.496646 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483145-mhvns"
Jan 21 09:45:00 crc kubenswrapper[5113]: I0121 09:45:00.595649 5113 scope.go:117] "RemoveContainer" containerID="63bb2655f5b899c4a4115f7e88337134cd2eea000a900526308dbd83aad7fcca"
Jan 21 09:45:00 crc kubenswrapper[5113]: I0121 09:45:00.772092 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483145-mhvns"]
Jan 21 09:45:00 crc kubenswrapper[5113]: W0121 09:45:00.779452 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3729a63_f276_462b_a69f_3ed67c756dce.slice/crio-8f3d0dc245f44a015d6baf404fc939d05945c099365e42efea8d9672c749aa3b WatchSource:0}: Error finding container 8f3d0dc245f44a015d6baf404fc939d05945c099365e42efea8d9672c749aa3b: Status 404 returned error can't find the container with id 8f3d0dc245f44a015d6baf404fc939d05945c099365e42efea8d9672c749aa3b
Jan 21 09:45:00 crc kubenswrapper[5113]: I0121 09:45:00.781623 5113 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 09:45:01 crc kubenswrapper[5113]: I0121 09:45:01.520866 5113 generic.go:358] "Generic (PLEG): container finished" podID="b3729a63-f276-462b-a69f-3ed67c756dce" containerID="3661d8362c116f31912869a0e3281bfa1ac63faa30dc35ed5bc1e15bdc45f2c0" exitCode=0
Jan 21 09:45:01 crc kubenswrapper[5113]: I0121 09:45:01.521212 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483145-mhvns" event={"ID":"b3729a63-f276-462b-a69f-3ed67c756dce","Type":"ContainerDied","Data":"3661d8362c116f31912869a0e3281bfa1ac63faa30dc35ed5bc1e15bdc45f2c0"}
Jan 21 09:45:01 crc kubenswrapper[5113]: I0121 09:45:01.521272 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483145-mhvns"
event={"ID":"b3729a63-f276-462b-a69f-3ed67c756dce","Type":"ContainerStarted","Data":"8f3d0dc245f44a015d6baf404fc939d05945c099365e42efea8d9672c749aa3b"} Jan 21 09:45:02 crc kubenswrapper[5113]: I0121 09:45:02.733225 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483145-mhvns" Jan 21 09:45:02 crc kubenswrapper[5113]: I0121 09:45:02.792702 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6kv7\" (UniqueName: \"kubernetes.io/projected/b3729a63-f276-462b-a69f-3ed67c756dce-kube-api-access-j6kv7\") pod \"b3729a63-f276-462b-a69f-3ed67c756dce\" (UID: \"b3729a63-f276-462b-a69f-3ed67c756dce\") " Jan 21 09:45:02 crc kubenswrapper[5113]: I0121 09:45:02.792940 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3729a63-f276-462b-a69f-3ed67c756dce-config-volume\") pod \"b3729a63-f276-462b-a69f-3ed67c756dce\" (UID: \"b3729a63-f276-462b-a69f-3ed67c756dce\") " Jan 21 09:45:02 crc kubenswrapper[5113]: I0121 09:45:02.793004 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b3729a63-f276-462b-a69f-3ed67c756dce-secret-volume\") pod \"b3729a63-f276-462b-a69f-3ed67c756dce\" (UID: \"b3729a63-f276-462b-a69f-3ed67c756dce\") " Jan 21 09:45:02 crc kubenswrapper[5113]: I0121 09:45:02.793690 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3729a63-f276-462b-a69f-3ed67c756dce-config-volume" (OuterVolumeSpecName: "config-volume") pod "b3729a63-f276-462b-a69f-3ed67c756dce" (UID: "b3729a63-f276-462b-a69f-3ed67c756dce"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 09:45:02 crc kubenswrapper[5113]: I0121 09:45:02.802177 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3729a63-f276-462b-a69f-3ed67c756dce-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b3729a63-f276-462b-a69f-3ed67c756dce" (UID: "b3729a63-f276-462b-a69f-3ed67c756dce"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 09:45:02 crc kubenswrapper[5113]: I0121 09:45:02.808015 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3729a63-f276-462b-a69f-3ed67c756dce-kube-api-access-j6kv7" (OuterVolumeSpecName: "kube-api-access-j6kv7") pod "b3729a63-f276-462b-a69f-3ed67c756dce" (UID: "b3729a63-f276-462b-a69f-3ed67c756dce"). InnerVolumeSpecName "kube-api-access-j6kv7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:45:02 crc kubenswrapper[5113]: I0121 09:45:02.894351 5113 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3729a63-f276-462b-a69f-3ed67c756dce-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 09:45:02 crc kubenswrapper[5113]: I0121 09:45:02.894382 5113 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b3729a63-f276-462b-a69f-3ed67c756dce-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 09:45:02 crc kubenswrapper[5113]: I0121 09:45:02.894391 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j6kv7\" (UniqueName: \"kubernetes.io/projected/b3729a63-f276-462b-a69f-3ed67c756dce-kube-api-access-j6kv7\") on node \"crc\" DevicePath \"\"" Jan 21 09:45:03 crc kubenswrapper[5113]: I0121 09:45:03.541214 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483145-mhvns" Jan 21 09:45:03 crc kubenswrapper[5113]: I0121 09:45:03.541298 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483145-mhvns" event={"ID":"b3729a63-f276-462b-a69f-3ed67c756dce","Type":"ContainerDied","Data":"8f3d0dc245f44a015d6baf404fc939d05945c099365e42efea8d9672c749aa3b"} Jan 21 09:45:03 crc kubenswrapper[5113]: I0121 09:45:03.541832 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f3d0dc245f44a015d6baf404fc939d05945c099365e42efea8d9672c749aa3b" Jan 21 09:45:08 crc kubenswrapper[5113]: I0121 09:45:08.843614 5113 scope.go:117] "RemoveContainer" containerID="323c269f606c89f54b9cd8eea875f977746d25c6a01c2f3a0d008ec5a4e7b9bf" Jan 21 09:45:08 crc kubenswrapper[5113]: E0121 09:45:08.844632 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 09:45:11 crc kubenswrapper[5113]: I0121 09:45:11.543506 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-6c4754584f-gmqc4_34e7f06e-075f-4ccf-a706-5a744ef37c25/operator/0.log" Jan 21 09:45:15 crc kubenswrapper[5113]: I0121 09:45:15.409765 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-5688757f5c-tvmkz_a0cf7e4c-4911-4f2f-8309-b3a890282b6e/operator/0.log" Jan 21 09:45:15 crc kubenswrapper[5113]: I0121 09:45:15.754497 5113 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_qdr-test_c151bef3-8a92-45e0-bfeb-53f319f6d6ce/qdr/0.log" Jan 21 09:45:19 crc kubenswrapper[5113]: I0121 09:45:19.844344 5113 scope.go:117] "RemoveContainer" containerID="323c269f606c89f54b9cd8eea875f977746d25c6a01c2f3a0d008ec5a4e7b9bf" Jan 21 09:45:19 crc kubenswrapper[5113]: E0121 09:45:19.847177 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 09:45:33 crc kubenswrapper[5113]: I0121 09:45:33.844202 5113 scope.go:117] "RemoveContainer" containerID="323c269f606c89f54b9cd8eea875f977746d25c6a01c2f3a0d008ec5a4e7b9bf" Jan 21 09:45:33 crc kubenswrapper[5113]: E0121 09:45:33.845426 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 09:45:41 crc kubenswrapper[5113]: I0121 09:45:41.920363 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ldwgr/must-gather-vhp4z"] Jan 21 09:45:41 crc kubenswrapper[5113]: I0121 09:45:41.922430 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3729a63-f276-462b-a69f-3ed67c756dce" containerName="collect-profiles" Jan 21 09:45:41 crc kubenswrapper[5113]: I0121 09:45:41.922461 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3729a63-f276-462b-a69f-3ed67c756dce" 
containerName="collect-profiles" Jan 21 09:45:41 crc kubenswrapper[5113]: I0121 09:45:41.922683 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="b3729a63-f276-462b-a69f-3ed67c756dce" containerName="collect-profiles" Jan 21 09:45:41 crc kubenswrapper[5113]: I0121 09:45:41.931116 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ldwgr/must-gather-vhp4z" Jan 21 09:45:41 crc kubenswrapper[5113]: I0121 09:45:41.933811 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-ldwgr\"/\"default-dockercfg-9zd4b\"" Jan 21 09:45:41 crc kubenswrapper[5113]: I0121 09:45:41.933851 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-ldwgr\"/\"kube-root-ca.crt\"" Jan 21 09:45:41 crc kubenswrapper[5113]: I0121 09:45:41.936823 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-ldwgr\"/\"openshift-service-ca.crt\"" Jan 21 09:45:41 crc kubenswrapper[5113]: I0121 09:45:41.937656 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ldwgr/must-gather-vhp4z"] Jan 21 09:45:42 crc kubenswrapper[5113]: I0121 09:45:42.025181 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77ghn\" (UniqueName: \"kubernetes.io/projected/259f9f63-2c73-455c-8290-62c6acf0d06f-kube-api-access-77ghn\") pod \"must-gather-vhp4z\" (UID: \"259f9f63-2c73-455c-8290-62c6acf0d06f\") " pod="openshift-must-gather-ldwgr/must-gather-vhp4z" Jan 21 09:45:42 crc kubenswrapper[5113]: I0121 09:45:42.025280 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/259f9f63-2c73-455c-8290-62c6acf0d06f-must-gather-output\") pod \"must-gather-vhp4z\" (UID: \"259f9f63-2c73-455c-8290-62c6acf0d06f\") " 
pod="openshift-must-gather-ldwgr/must-gather-vhp4z" Jan 21 09:45:42 crc kubenswrapper[5113]: I0121 09:45:42.126807 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/259f9f63-2c73-455c-8290-62c6acf0d06f-must-gather-output\") pod \"must-gather-vhp4z\" (UID: \"259f9f63-2c73-455c-8290-62c6acf0d06f\") " pod="openshift-must-gather-ldwgr/must-gather-vhp4z" Jan 21 09:45:42 crc kubenswrapper[5113]: I0121 09:45:42.126881 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-77ghn\" (UniqueName: \"kubernetes.io/projected/259f9f63-2c73-455c-8290-62c6acf0d06f-kube-api-access-77ghn\") pod \"must-gather-vhp4z\" (UID: \"259f9f63-2c73-455c-8290-62c6acf0d06f\") " pod="openshift-must-gather-ldwgr/must-gather-vhp4z" Jan 21 09:45:42 crc kubenswrapper[5113]: I0121 09:45:42.127618 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/259f9f63-2c73-455c-8290-62c6acf0d06f-must-gather-output\") pod \"must-gather-vhp4z\" (UID: \"259f9f63-2c73-455c-8290-62c6acf0d06f\") " pod="openshift-must-gather-ldwgr/must-gather-vhp4z" Jan 21 09:45:42 crc kubenswrapper[5113]: I0121 09:45:42.145132 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-77ghn\" (UniqueName: \"kubernetes.io/projected/259f9f63-2c73-455c-8290-62c6acf0d06f-kube-api-access-77ghn\") pod \"must-gather-vhp4z\" (UID: \"259f9f63-2c73-455c-8290-62c6acf0d06f\") " pod="openshift-must-gather-ldwgr/must-gather-vhp4z" Jan 21 09:45:42 crc kubenswrapper[5113]: I0121 09:45:42.249273 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ldwgr/must-gather-vhp4z" Jan 21 09:45:42 crc kubenswrapper[5113]: I0121 09:45:42.518026 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ldwgr/must-gather-vhp4z"] Jan 21 09:45:42 crc kubenswrapper[5113]: W0121 09:45:42.522081 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod259f9f63_2c73_455c_8290_62c6acf0d06f.slice/crio-18223e97b5f60966f9b7208b392c0e40f483cd44d38c9f4eeb0caf5e2d689984 WatchSource:0}: Error finding container 18223e97b5f60966f9b7208b392c0e40f483cd44d38c9f4eeb0caf5e2d689984: Status 404 returned error can't find the container with id 18223e97b5f60966f9b7208b392c0e40f483cd44d38c9f4eeb0caf5e2d689984 Jan 21 09:45:42 crc kubenswrapper[5113]: I0121 09:45:42.925622 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ldwgr/must-gather-vhp4z" event={"ID":"259f9f63-2c73-455c-8290-62c6acf0d06f","Type":"ContainerStarted","Data":"18223e97b5f60966f9b7208b392c0e40f483cd44d38c9f4eeb0caf5e2d689984"} Jan 21 09:45:47 crc kubenswrapper[5113]: I0121 09:45:47.843300 5113 scope.go:117] "RemoveContainer" containerID="323c269f606c89f54b9cd8eea875f977746d25c6a01c2f3a0d008ec5a4e7b9bf" Jan 21 09:45:47 crc kubenswrapper[5113]: E0121 09:45:47.844096 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 09:45:48 crc kubenswrapper[5113]: I0121 09:45:48.979366 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ldwgr/must-gather-vhp4z" 
event={"ID":"259f9f63-2c73-455c-8290-62c6acf0d06f","Type":"ContainerStarted","Data":"7b54355431e88edfd2cbd0beafddecb99da4a74e1af5cbf94660c2dbe03cfc18"} Jan 21 09:45:48 crc kubenswrapper[5113]: I0121 09:45:48.979982 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ldwgr/must-gather-vhp4z" event={"ID":"259f9f63-2c73-455c-8290-62c6acf0d06f","Type":"ContainerStarted","Data":"79506873caa8699402b5f21be9915c80b0a38495d1aa6d72900549b88036cfb6"} Jan 21 09:45:59 crc kubenswrapper[5113]: I0121 09:45:59.844011 5113 scope.go:117] "RemoveContainer" containerID="323c269f606c89f54b9cd8eea875f977746d25c6a01c2f3a0d008ec5a4e7b9bf" Jan 21 09:45:59 crc kubenswrapper[5113]: E0121 09:45:59.845396 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 09:46:00 crc kubenswrapper[5113]: I0121 09:46:00.147731 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-ldwgr/must-gather-vhp4z" podStartSLOduration=13.579996757 podStartE2EDuration="19.147704312s" podCreationTimestamp="2026-01-21 09:45:41 +0000 UTC" firstStartedPulling="2026-01-21 09:45:42.526147972 +0000 UTC m=+1672.026975041" lastFinishedPulling="2026-01-21 09:45:48.093855517 +0000 UTC m=+1677.594682596" observedRunningTime="2026-01-21 09:45:49.009807624 +0000 UTC m=+1678.510634673" watchObservedRunningTime="2026-01-21 09:46:00.147704312 +0000 UTC m=+1689.648531401" Jan 21 09:46:00 crc kubenswrapper[5113]: I0121 09:46:00.149950 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483146-xsgkz"] Jan 21 09:46:01 crc kubenswrapper[5113]: I0121 
09:46:01.205625 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483146-xsgkz"] Jan 21 09:46:01 crc kubenswrapper[5113]: I0121 09:46:01.205913 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483146-xsgkz" Jan 21 09:46:01 crc kubenswrapper[5113]: I0121 09:46:01.211096 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 09:46:01 crc kubenswrapper[5113]: I0121 09:46:01.211771 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 09:46:01 crc kubenswrapper[5113]: I0121 09:46:01.211984 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-mmrr5\"" Jan 21 09:46:01 crc kubenswrapper[5113]: I0121 09:46:01.275177 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njjmt\" (UniqueName: \"kubernetes.io/projected/e17b1571-e260-4edb-b107-3ae3ac7357e2-kube-api-access-njjmt\") pod \"auto-csr-approver-29483146-xsgkz\" (UID: \"e17b1571-e260-4edb-b107-3ae3ac7357e2\") " pod="openshift-infra/auto-csr-approver-29483146-xsgkz" Jan 21 09:46:01 crc kubenswrapper[5113]: I0121 09:46:01.376819 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-njjmt\" (UniqueName: \"kubernetes.io/projected/e17b1571-e260-4edb-b107-3ae3ac7357e2-kube-api-access-njjmt\") pod \"auto-csr-approver-29483146-xsgkz\" (UID: \"e17b1571-e260-4edb-b107-3ae3ac7357e2\") " pod="openshift-infra/auto-csr-approver-29483146-xsgkz" Jan 21 09:46:01 crc kubenswrapper[5113]: I0121 09:46:01.406144 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-njjmt\" (UniqueName: 
\"kubernetes.io/projected/e17b1571-e260-4edb-b107-3ae3ac7357e2-kube-api-access-njjmt\") pod \"auto-csr-approver-29483146-xsgkz\" (UID: \"e17b1571-e260-4edb-b107-3ae3ac7357e2\") " pod="openshift-infra/auto-csr-approver-29483146-xsgkz" Jan 21 09:46:01 crc kubenswrapper[5113]: I0121 09:46:01.541371 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483146-xsgkz" Jan 21 09:46:01 crc kubenswrapper[5113]: I0121 09:46:01.816120 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483146-xsgkz"] Jan 21 09:46:02 crc kubenswrapper[5113]: I0121 09:46:02.102956 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483146-xsgkz" event={"ID":"e17b1571-e260-4edb-b107-3ae3ac7357e2","Type":"ContainerStarted","Data":"68b7922d82be10cf779d5793a7ecad976beb2db547ff0e40f28493d8932025ea"} Jan 21 09:46:03 crc kubenswrapper[5113]: I0121 09:46:03.114768 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483146-xsgkz" event={"ID":"e17b1571-e260-4edb-b107-3ae3ac7357e2","Type":"ContainerStarted","Data":"4f257d0ef81b2106ec94f9756024c20169dee6f7885515d978c95b7be345001c"} Jan 21 09:46:03 crc kubenswrapper[5113]: I0121 09:46:03.129309 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29483146-xsgkz" podStartSLOduration=2.298990287 podStartE2EDuration="3.129283113s" podCreationTimestamp="2026-01-21 09:46:00 +0000 UTC" firstStartedPulling="2026-01-21 09:46:01.821338816 +0000 UTC m=+1691.322165875" lastFinishedPulling="2026-01-21 09:46:02.651631602 +0000 UTC m=+1692.152458701" observedRunningTime="2026-01-21 09:46:03.127339808 +0000 UTC m=+1692.628166917" watchObservedRunningTime="2026-01-21 09:46:03.129283113 +0000 UTC m=+1692.630110192" Jan 21 09:46:03 crc kubenswrapper[5113]: I0121 09:46:03.709176 5113 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-75ffdb6fcd-q9rxd_248b0806-1d40-4954-a74e-6282de18ff7b/control-plane-machine-set-operator/0.log" Jan 21 09:46:03 crc kubenswrapper[5113]: I0121 09:46:03.742677 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-qtl4r_d9041fd4-ea1a-453e-b9c6-efe382434cc0/kube-rbac-proxy/0.log" Jan 21 09:46:03 crc kubenswrapper[5113]: I0121 09:46:03.755816 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-qtl4r_d9041fd4-ea1a-453e-b9c6-efe382434cc0/machine-api-operator/0.log" Jan 21 09:46:04 crc kubenswrapper[5113]: I0121 09:46:04.125938 5113 generic.go:358] "Generic (PLEG): container finished" podID="e17b1571-e260-4edb-b107-3ae3ac7357e2" containerID="4f257d0ef81b2106ec94f9756024c20169dee6f7885515d978c95b7be345001c" exitCode=0 Jan 21 09:46:04 crc kubenswrapper[5113]: I0121 09:46:04.125995 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483146-xsgkz" event={"ID":"e17b1571-e260-4edb-b107-3ae3ac7357e2","Type":"ContainerDied","Data":"4f257d0ef81b2106ec94f9756024c20169dee6f7885515d978c95b7be345001c"} Jan 21 09:46:05 crc kubenswrapper[5113]: I0121 09:46:05.407449 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483146-xsgkz" Jan 21 09:46:05 crc kubenswrapper[5113]: I0121 09:46:05.543107 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njjmt\" (UniqueName: \"kubernetes.io/projected/e17b1571-e260-4edb-b107-3ae3ac7357e2-kube-api-access-njjmt\") pod \"e17b1571-e260-4edb-b107-3ae3ac7357e2\" (UID: \"e17b1571-e260-4edb-b107-3ae3ac7357e2\") " Jan 21 09:46:05 crc kubenswrapper[5113]: I0121 09:46:05.554585 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e17b1571-e260-4edb-b107-3ae3ac7357e2-kube-api-access-njjmt" (OuterVolumeSpecName: "kube-api-access-njjmt") pod "e17b1571-e260-4edb-b107-3ae3ac7357e2" (UID: "e17b1571-e260-4edb-b107-3ae3ac7357e2"). InnerVolumeSpecName "kube-api-access-njjmt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:46:05 crc kubenswrapper[5113]: I0121 09:46:05.644920 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-njjmt\" (UniqueName: \"kubernetes.io/projected/e17b1571-e260-4edb-b107-3ae3ac7357e2-kube-api-access-njjmt\") on node \"crc\" DevicePath \"\"" Jan 21 09:46:06 crc kubenswrapper[5113]: I0121 09:46:06.151330 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483146-xsgkz" event={"ID":"e17b1571-e260-4edb-b107-3ae3ac7357e2","Type":"ContainerDied","Data":"68b7922d82be10cf779d5793a7ecad976beb2db547ff0e40f28493d8932025ea"} Jan 21 09:46:06 crc kubenswrapper[5113]: I0121 09:46:06.151573 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68b7922d82be10cf779d5793a7ecad976beb2db547ff0e40f28493d8932025ea" Jan 21 09:46:06 crc kubenswrapper[5113]: I0121 09:46:06.151369 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483146-xsgkz" Jan 21 09:46:06 crc kubenswrapper[5113]: I0121 09:46:06.213560 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483140-k6zdk"] Jan 21 09:46:06 crc kubenswrapper[5113]: I0121 09:46:06.221156 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483140-k6zdk"] Jan 21 09:46:06 crc kubenswrapper[5113]: I0121 09:46:06.855119 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4bceb22-9325-4345-aea0-c2a251183b10" path="/var/lib/kubelet/pods/e4bceb22-9325-4345-aea0-c2a251183b10/volumes" Jan 21 09:46:09 crc kubenswrapper[5113]: I0121 09:46:09.473310 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858d87f86b-qnm4h_36704923-46ee-4c88-8aa3-d789474e6929/cert-manager-controller/0.log" Jan 21 09:46:09 crc kubenswrapper[5113]: I0121 09:46:09.489931 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7dbf76d5c8-2g29d_3bb23691-eea9-41a3-b66f-ecef43808bd5/cert-manager-cainjector/0.log" Jan 21 09:46:09 crc kubenswrapper[5113]: I0121 09:46:09.501865 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-7894b5b9b4-jzwdf_cb5149c7-b193-48df-903d-729ae193fca0/cert-manager-webhook/0.log" Jan 21 09:46:13 crc kubenswrapper[5113]: I0121 09:46:13.844137 5113 scope.go:117] "RemoveContainer" containerID="323c269f606c89f54b9cd8eea875f977746d25c6a01c2f3a0d008ec5a4e7b9bf" Jan 21 09:46:13 crc kubenswrapper[5113]: E0121 09:46:13.845816 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 09:46:15 crc kubenswrapper[5113]: I0121 09:46:15.195286 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-vpdxw_7944be2a-7f45-495b-90e5-b31570149a43/prometheus-operator/0.log" Jan 21 09:46:15 crc kubenswrapper[5113]: I0121 09:46:15.206406 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5ddf46cb8b-sr4n4_3843f0f9-ae8b-4934-a635-75e80ae8379d/prometheus-operator-admission-webhook/0.log" Jan 21 09:46:15 crc kubenswrapper[5113]: I0121 09:46:15.226003 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5ddf46cb8b-xs9qp_83b782b2-ea6f-4d32-a56b-7c8ad0c39688/prometheus-operator-admission-webhook/0.log" Jan 21 09:46:15 crc kubenswrapper[5113]: I0121 09:46:15.245103 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-5jl7x_1ac4f96e-4018-4e2d-8a80-2eff7c26c08e/operator/0.log" Jan 21 09:46:15 crc kubenswrapper[5113]: I0121 09:46:15.255333 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-mfk8j_a33c9424-1bbb-4a0c-9fb2-fd5eb6de667e/perses-operator/0.log" Jan 21 09:46:21 crc kubenswrapper[5113]: I0121 09:46:21.570803 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ll4b_52e0414b-6283-42ab-9e76-609f811f45c8/extract/0.log" Jan 21 09:46:21 crc kubenswrapper[5113]: I0121 09:46:21.581774 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ll4b_52e0414b-6283-42ab-9e76-609f811f45c8/util/0.log" Jan 21 09:46:21 crc kubenswrapper[5113]: I0121 
09:46:21.639647 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a8ll4b_52e0414b-6283-42ab-9e76-609f811f45c8/pull/0.log" Jan 21 09:46:21 crc kubenswrapper[5113]: I0121 09:46:21.663905 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnb4sw_95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe/extract/0.log" Jan 21 09:46:21 crc kubenswrapper[5113]: I0121 09:46:21.680852 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnb4sw_95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe/util/0.log" Jan 21 09:46:21 crc kubenswrapper[5113]: I0121 09:46:21.689602 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnb4sw_95a7b07e-a0b7-4920-a2d1-c0cc26ba60fe/pull/0.log" Jan 21 09:46:21 crc kubenswrapper[5113]: I0121 09:46:21.703236 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejfl7w_6464cb6c-1ad4-4eef-b492-4351e8fb8d3a/extract/0.log" Jan 21 09:46:21 crc kubenswrapper[5113]: I0121 09:46:21.711845 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejfl7w_6464cb6c-1ad4-4eef-b492-4351e8fb8d3a/util/0.log" Jan 21 09:46:21 crc kubenswrapper[5113]: I0121 09:46:21.718056 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejfl7w_6464cb6c-1ad4-4eef-b492-4351e8fb8d3a/pull/0.log" Jan 21 09:46:21 crc kubenswrapper[5113]: I0121 09:46:21.738804 5113 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086mvfp_7284cc68-b573-49c7-b1cd-3c46715c1604/extract/0.log" Jan 21 09:46:21 crc kubenswrapper[5113]: I0121 09:46:21.744427 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086mvfp_7284cc68-b573-49c7-b1cd-3c46715c1604/util/0.log" Jan 21 09:46:21 crc kubenswrapper[5113]: I0121 09:46:21.754275 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086mvfp_7284cc68-b573-49c7-b1cd-3c46715c1604/pull/0.log" Jan 21 09:46:21 crc kubenswrapper[5113]: I0121 09:46:21.970393 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nsgd7_8e56cc9d-707d-40d1-9ea5-29233a2270ea/registry-server/0.log" Jan 21 09:46:21 crc kubenswrapper[5113]: I0121 09:46:21.976520 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nsgd7_8e56cc9d-707d-40d1-9ea5-29233a2270ea/extract-utilities/0.log" Jan 21 09:46:21 crc kubenswrapper[5113]: I0121 09:46:21.983258 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-nsgd7_8e56cc9d-707d-40d1-9ea5-29233a2270ea/extract-content/0.log" Jan 21 09:46:22 crc kubenswrapper[5113]: I0121 09:46:22.200505 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5jhqk_0fc5a81a-2441-41ce-9c03-99533e7c0fc5/registry-server/0.log" Jan 21 09:46:22 crc kubenswrapper[5113]: I0121 09:46:22.205085 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5jhqk_0fc5a81a-2441-41ce-9c03-99533e7c0fc5/extract-utilities/0.log" Jan 21 09:46:22 crc kubenswrapper[5113]: I0121 09:46:22.210009 5113 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-5jhqk_0fc5a81a-2441-41ce-9c03-99533e7c0fc5/extract-content/0.log" Jan 21 09:46:22 crc kubenswrapper[5113]: I0121 09:46:22.235718 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-w69cp_705df55e-0346-4051-a2db-cba821b3ef8c/marketplace-operator/0.log" Jan 21 09:46:22 crc kubenswrapper[5113]: I0121 09:46:22.449383 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wh5g8_46b7f551-fb42-475f-8c13-7810f0eed33e/registry-server/0.log" Jan 21 09:46:22 crc kubenswrapper[5113]: I0121 09:46:22.454412 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wh5g8_46b7f551-fb42-475f-8c13-7810f0eed33e/extract-utilities/0.log" Jan 21 09:46:22 crc kubenswrapper[5113]: I0121 09:46:22.461230 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wh5g8_46b7f551-fb42-475f-8c13-7810f0eed33e/extract-content/0.log" Jan 21 09:46:27 crc kubenswrapper[5113]: I0121 09:46:27.407533 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-vpdxw_7944be2a-7f45-495b-90e5-b31570149a43/prometheus-operator/0.log" Jan 21 09:46:27 crc kubenswrapper[5113]: I0121 09:46:27.419702 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5ddf46cb8b-sr4n4_3843f0f9-ae8b-4934-a635-75e80ae8379d/prometheus-operator-admission-webhook/0.log" Jan 21 09:46:27 crc kubenswrapper[5113]: I0121 09:46:27.431679 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5ddf46cb8b-xs9qp_83b782b2-ea6f-4d32-a56b-7c8ad0c39688/prometheus-operator-admission-webhook/0.log" Jan 21 09:46:27 crc kubenswrapper[5113]: I0121 09:46:27.449524 5113 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-5jl7x_1ac4f96e-4018-4e2d-8a80-2eff7c26c08e/operator/0.log" Jan 21 09:46:27 crc kubenswrapper[5113]: I0121 09:46:27.461379 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-mfk8j_a33c9424-1bbb-4a0c-9fb2-fd5eb6de667e/perses-operator/0.log" Jan 21 09:46:28 crc kubenswrapper[5113]: I0121 09:46:28.844001 5113 scope.go:117] "RemoveContainer" containerID="323c269f606c89f54b9cd8eea875f977746d25c6a01c2f3a0d008ec5a4e7b9bf" Jan 21 09:46:28 crc kubenswrapper[5113]: E0121 09:46:28.846185 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 09:46:37 crc kubenswrapper[5113]: I0121 09:46:37.684098 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-vpdxw_7944be2a-7f45-495b-90e5-b31570149a43/prometheus-operator/0.log" Jan 21 09:46:37 crc kubenswrapper[5113]: I0121 09:46:37.698095 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5ddf46cb8b-sr4n4_3843f0f9-ae8b-4934-a635-75e80ae8379d/prometheus-operator-admission-webhook/0.log" Jan 21 09:46:37 crc kubenswrapper[5113]: I0121 09:46:37.711566 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5ddf46cb8b-xs9qp_83b782b2-ea6f-4d32-a56b-7c8ad0c39688/prometheus-operator-admission-webhook/0.log" Jan 21 09:46:37 crc kubenswrapper[5113]: I0121 09:46:37.734537 5113 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-5jl7x_1ac4f96e-4018-4e2d-8a80-2eff7c26c08e/operator/0.log" Jan 21 09:46:37 crc kubenswrapper[5113]: I0121 09:46:37.747070 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-mfk8j_a33c9424-1bbb-4a0c-9fb2-fd5eb6de667e/perses-operator/0.log" Jan 21 09:46:37 crc kubenswrapper[5113]: I0121 09:46:37.799112 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858d87f86b-qnm4h_36704923-46ee-4c88-8aa3-d789474e6929/cert-manager-controller/0.log" Jan 21 09:46:37 crc kubenswrapper[5113]: I0121 09:46:37.809841 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7dbf76d5c8-2g29d_3bb23691-eea9-41a3-b66f-ecef43808bd5/cert-manager-cainjector/0.log" Jan 21 09:46:37 crc kubenswrapper[5113]: I0121 09:46:37.824704 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-7894b5b9b4-jzwdf_cb5149c7-b193-48df-903d-729ae193fca0/cert-manager-webhook/0.log" Jan 21 09:46:38 crc kubenswrapper[5113]: I0121 09:46:38.336959 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858d87f86b-qnm4h_36704923-46ee-4c88-8aa3-d789474e6929/cert-manager-controller/0.log" Jan 21 09:46:38 crc kubenswrapper[5113]: I0121 09:46:38.350405 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7dbf76d5c8-2g29d_3bb23691-eea9-41a3-b66f-ecef43808bd5/cert-manager-cainjector/0.log" Jan 21 09:46:38 crc kubenswrapper[5113]: I0121 09:46:38.360330 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-7894b5b9b4-jzwdf_cb5149c7-b193-48df-903d-729ae193fca0/cert-manager-webhook/0.log" Jan 21 09:46:38 crc kubenswrapper[5113]: I0121 09:46:38.781495 5113 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-75ffdb6fcd-q9rxd_248b0806-1d40-4954-a74e-6282de18ff7b/control-plane-machine-set-operator/0.log" Jan 21 09:46:38 crc kubenswrapper[5113]: I0121 09:46:38.791191 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-qtl4r_d9041fd4-ea1a-453e-b9c6-efe382434cc0/kube-rbac-proxy/0.log" Jan 21 09:46:38 crc kubenswrapper[5113]: I0121 09:46:38.800660 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-qtl4r_d9041fd4-ea1a-453e-b9c6-efe382434cc0/machine-api-operator/0.log" Jan 21 09:46:39 crc kubenswrapper[5113]: I0121 09:46:39.401083 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_c54c1065-fd71-4792-95c5-555b4af863c4/alertmanager/0.log" Jan 21 09:46:39 crc kubenswrapper[5113]: I0121 09:46:39.410475 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_c54c1065-fd71-4792-95c5-555b4af863c4/config-reloader/0.log" Jan 21 09:46:39 crc kubenswrapper[5113]: I0121 09:46:39.420553 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_c54c1065-fd71-4792-95c5-555b4af863c4/oauth-proxy/0.log" Jan 21 09:46:39 crc kubenswrapper[5113]: I0121 09:46:39.429926 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_c54c1065-fd71-4792-95c5-555b4af863c4/init-config-reloader/0.log" Jan 21 09:46:39 crc kubenswrapper[5113]: I0121 09:46:39.443330 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_f55cb209-1c34-4e87-bd93-efaa2cb8c9fc/curl/0.log" Jan 21 09:46:39 crc kubenswrapper[5113]: I0121 09:46:39.454854 5113 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-b67f79854-pcpch_8d650442-026a-4ce5-9d25-8354edb3df27/bridge/1.log" Jan 21 09:46:39 crc kubenswrapper[5113]: I0121 09:46:39.454907 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-b67f79854-pcpch_8d650442-026a-4ce5-9d25-8354edb3df27/bridge/2.log" Jan 21 09:46:39 crc kubenswrapper[5113]: I0121 09:46:39.461230 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-b67f79854-pcpch_8d650442-026a-4ce5-9d25-8354edb3df27/sg-core/0.log" Jan 21 09:46:39 crc kubenswrapper[5113]: I0121 09:46:39.475950 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq_aa026e59-d4cb-4580-b908-a08b359465a2/oauth-proxy/0.log" Jan 21 09:46:39 crc kubenswrapper[5113]: I0121 09:46:39.485525 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq_aa026e59-d4cb-4580-b908-a08b359465a2/bridge/2.log" Jan 21 09:46:39 crc kubenswrapper[5113]: I0121 09:46:39.486158 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq_aa026e59-d4cb-4580-b908-a08b359465a2/bridge/1.log" Jan 21 09:46:39 crc kubenswrapper[5113]: I0121 09:46:39.493711 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-bjgfq_aa026e59-d4cb-4580-b908-a08b359465a2/sg-core/0.log" Jan 21 09:46:39 crc kubenswrapper[5113]: I0121 09:46:39.507818 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-585497b7f5-km7ct_4cc32795-d962-403a-a4b0-b05770e77786/bridge/2.log" Jan 21 09:46:39 crc kubenswrapper[5113]: I0121 09:46:39.509245 5113 log.go:25] "Finished 
parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-585497b7f5-km7ct_4cc32795-d962-403a-a4b0-b05770e77786/bridge/1.log" Jan 21 09:46:39 crc kubenswrapper[5113]: I0121 09:46:39.515871 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-585497b7f5-km7ct_4cc32795-d962-403a-a4b0-b05770e77786/sg-core/0.log" Jan 21 09:46:39 crc kubenswrapper[5113]: I0121 09:46:39.533084 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst_44130ffd-90a2-4b98-b98a-28c10f45a9ca/oauth-proxy/0.log" Jan 21 09:46:39 crc kubenswrapper[5113]: I0121 09:46:39.540139 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst_44130ffd-90a2-4b98-b98a-28c10f45a9ca/bridge/2.log" Jan 21 09:46:39 crc kubenswrapper[5113]: I0121 09:46:39.540920 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst_44130ffd-90a2-4b98-b98a-28c10f45a9ca/bridge/1.log" Jan 21 09:46:39 crc kubenswrapper[5113]: I0121 09:46:39.548053 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7f8f5c6486-p9fst_44130ffd-90a2-4b98-b98a-28c10f45a9ca/sg-core/0.log" Jan 21 09:46:39 crc kubenswrapper[5113]: I0121 09:46:39.564458 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z_ecf101f2-0ba8-4f13-8b28-25f3102f6907/oauth-proxy/0.log" Jan 21 09:46:39 crc kubenswrapper[5113]: I0121 09:46:39.575534 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z_ecf101f2-0ba8-4f13-8b28-25f3102f6907/bridge/1.log" Jan 21 09:46:39 crc kubenswrapper[5113]: I0121 
09:46:39.575591 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z_ecf101f2-0ba8-4f13-8b28-25f3102f6907/bridge/2.log" Jan 21 09:46:39 crc kubenswrapper[5113]: I0121 09:46:39.582471 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-58c78bbf69-7jq8z_ecf101f2-0ba8-4f13-8b28-25f3102f6907/sg-core/0.log" Jan 21 09:46:39 crc kubenswrapper[5113]: I0121 09:46:39.602489 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-55bf8d5cb-fqvvc_a845c762-53f6-44eb-9c7a-31755d333fe4/default-interconnect/0.log" Jan 21 09:46:39 crc kubenswrapper[5113]: I0121 09:46:39.612367 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-694dc457d5-nvtcp_904fae67-943b-4c4e-b2a9-969896ca1635/prometheus-webhook-snmp/0.log" Jan 21 09:46:39 crc kubenswrapper[5113]: I0121 09:46:39.662223 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elastic-operator-578f8f8d6c-f524v_019809be-ccc7-49df-89f9-84eff425459d/manager/0.log" Jan 21 09:46:39 crc kubenswrapper[5113]: I0121 09:46:39.697874 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_961a978d-fbd8-415d-a41f-b80b9693e721/elasticsearch/0.log" Jan 21 09:46:39 crc kubenswrapper[5113]: I0121 09:46:39.706263 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_961a978d-fbd8-415d-a41f-b80b9693e721/elastic-internal-init-filesystem/0.log" Jan 21 09:46:39 crc kubenswrapper[5113]: I0121 09:46:39.713107 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_961a978d-fbd8-415d-a41f-b80b9693e721/elastic-internal-suspend/0.log" Jan 21 09:46:39 crc kubenswrapper[5113]: I0121 09:46:39.725397 5113 log.go:25] "Finished 
parsing log file" path="/var/log/pods/service-telemetry_interconnect-operator-78b9bd8798-prrkk_7c1cc988-c5a8-4ee1-a41b-1fd925a848dc/interconnect-operator/0.log" Jan 21 09:46:39 crc kubenswrapper[5113]: I0121 09:46:39.748097 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_7daab145-3025-4d93-bb61-8921bd849a13/prometheus/0.log" Jan 21 09:46:39 crc kubenswrapper[5113]: I0121 09:46:39.758785 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_7daab145-3025-4d93-bb61-8921bd849a13/config-reloader/0.log" Jan 21 09:46:39 crc kubenswrapper[5113]: I0121 09:46:39.766498 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_7daab145-3025-4d93-bb61-8921bd849a13/oauth-proxy/0.log" Jan 21 09:46:39 crc kubenswrapper[5113]: I0121 09:46:39.776150 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_7daab145-3025-4d93-bb61-8921bd849a13/init-config-reloader/0.log" Jan 21 09:46:39 crc kubenswrapper[5113]: I0121 09:46:39.807991 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_5f329e83-b6df-4338-bd89-08e3346dadf3/docker-build/0.log" Jan 21 09:46:39 crc kubenswrapper[5113]: I0121 09:46:39.814749 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_5f329e83-b6df-4338-bd89-08e3346dadf3/git-clone/0.log" Jan 21 09:46:39 crc kubenswrapper[5113]: I0121 09:46:39.820663 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_5f329e83-b6df-4338-bd89-08e3346dadf3/manage-dockerfile/0.log" Jan 21 09:46:39 crc kubenswrapper[5113]: I0121 09:46:39.837383 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_c151bef3-8a92-45e0-bfeb-53f319f6d6ce/qdr/0.log" Jan 21 09:46:39 crc 
kubenswrapper[5113]: I0121 09:46:39.916916 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d/docker-build/0.log" Jan 21 09:46:39 crc kubenswrapper[5113]: I0121 09:46:39.923607 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d/git-clone/0.log" Jan 21 09:46:39 crc kubenswrapper[5113]: I0121 09:46:39.936222 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_f409c2dd-3176-4573-8c3c-4b8a4f3ebc9d/manage-dockerfile/0.log" Jan 21 09:46:40 crc kubenswrapper[5113]: I0121 09:46:40.208482 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-6c4754584f-gmqc4_34e7f06e-075f-4ccf-a706-5a744ef37c25/operator/0.log" Jan 21 09:46:40 crc kubenswrapper[5113]: I0121 09:46:40.285245 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_c985ccf6-8457-45c1-acdc-667628d80d5f/docker-build/0.log" Jan 21 09:46:40 crc kubenswrapper[5113]: I0121 09:46:40.292988 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_c985ccf6-8457-45c1-acdc-667628d80d5f/git-clone/0.log" Jan 21 09:46:40 crc kubenswrapper[5113]: I0121 09:46:40.298919 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_c985ccf6-8457-45c1-acdc-667628d80d5f/manage-dockerfile/0.log" Jan 21 09:46:40 crc kubenswrapper[5113]: I0121 09:46:40.359785 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_03769401-1020-4b9e-9638-36fc2c68bb59/docker-build/0.log" Jan 21 09:46:40 crc kubenswrapper[5113]: I0121 09:46:40.366272 5113 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_sg-core-2-build_03769401-1020-4b9e-9638-36fc2c68bb59/git-clone/0.log" Jan 21 09:46:40 crc kubenswrapper[5113]: I0121 09:46:40.373190 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_03769401-1020-4b9e-9638-36fc2c68bb59/manage-dockerfile/0.log" Jan 21 09:46:40 crc kubenswrapper[5113]: I0121 09:46:40.445901 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_b2c9d3e6-d659-4bc9-95a9-e5325d8fd568/docker-build/0.log" Jan 21 09:46:40 crc kubenswrapper[5113]: I0121 09:46:40.450845 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_b2c9d3e6-d659-4bc9-95a9-e5325d8fd568/git-clone/0.log" Jan 21 09:46:40 crc kubenswrapper[5113]: I0121 09:46:40.458166 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_b2c9d3e6-d659-4bc9-95a9-e5325d8fd568/manage-dockerfile/0.log" Jan 21 09:46:40 crc kubenswrapper[5113]: I0121 09:46:40.852396 5113 scope.go:117] "RemoveContainer" containerID="323c269f606c89f54b9cd8eea875f977746d25c6a01c2f3a0d008ec5a4e7b9bf" Jan 21 09:46:40 crc kubenswrapper[5113]: E0121 09:46:40.852909 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 09:46:44 crc kubenswrapper[5113]: I0121 09:46:44.477385 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-5688757f5c-tvmkz_a0cf7e4c-4911-4f2f-8309-b3a890282b6e/operator/0.log" Jan 21 09:46:44 crc kubenswrapper[5113]: I0121 
09:46:44.504762 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-m475d_297af4e8-245e-4df4-b837-e50e334d7b17/smoketest-collectd/0.log" Jan 21 09:46:44 crc kubenswrapper[5113]: I0121 09:46:44.512204 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-m475d_297af4e8-245e-4df4-b837-e50e334d7b17/smoketest-ceilometer/0.log" Jan 21 09:46:46 crc kubenswrapper[5113]: I0121 09:46:46.157237 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-8ss9n_73ab8d16-75a8-4471-b540-95356246fbfa/kube-multus-additional-cni-plugins/0.log" Jan 21 09:46:46 crc kubenswrapper[5113]: I0121 09:46:46.166030 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-8ss9n_73ab8d16-75a8-4471-b540-95356246fbfa/egress-router-binary-copy/0.log" Jan 21 09:46:46 crc kubenswrapper[5113]: I0121 09:46:46.174345 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-8ss9n_73ab8d16-75a8-4471-b540-95356246fbfa/cni-plugins/0.log" Jan 21 09:46:46 crc kubenswrapper[5113]: I0121 09:46:46.180838 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-8ss9n_73ab8d16-75a8-4471-b540-95356246fbfa/bond-cni-plugin/0.log" Jan 21 09:46:46 crc kubenswrapper[5113]: I0121 09:46:46.187216 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-8ss9n_73ab8d16-75a8-4471-b540-95356246fbfa/routeoverride-cni/0.log" Jan 21 09:46:46 crc kubenswrapper[5113]: I0121 09:46:46.193622 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-8ss9n_73ab8d16-75a8-4471-b540-95356246fbfa/whereabouts-cni-bincopy/0.log" Jan 21 09:46:46 crc kubenswrapper[5113]: I0121 09:46:46.199567 5113 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-8ss9n_73ab8d16-75a8-4471-b540-95356246fbfa/whereabouts-cni/0.log" Jan 21 09:46:46 crc kubenswrapper[5113]: I0121 09:46:46.207872 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-69db94689b-8gwvh_ca8d2ce1-3e0c-44f8-9327-4935a2691c4d/multus-admission-controller/0.log" Jan 21 09:46:46 crc kubenswrapper[5113]: I0121 09:46:46.218755 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-69db94689b-8gwvh_ca8d2ce1-3e0c-44f8-9327-4935a2691c4d/kube-rbac-proxy/0.log" Jan 21 09:46:46 crc kubenswrapper[5113]: I0121 09:46:46.275406 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vcw7s_11da35cd-b282-4537-ac8f-b6c86b18c21f/kube-multus/1.log" Jan 21 09:46:46 crc kubenswrapper[5113]: I0121 09:46:46.282831 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vcw7s_11da35cd-b282-4537-ac8f-b6c86b18c21f/kube-multus/0.log" Jan 21 09:46:46 crc kubenswrapper[5113]: I0121 09:46:46.306097 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-tcv7n_0d75af50-e19d-4048-b80e-51dae4c3378e/network-metrics-daemon/0.log" Jan 21 09:46:46 crc kubenswrapper[5113]: I0121 09:46:46.310727 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-tcv7n_0d75af50-e19d-4048-b80e-51dae4c3378e/kube-rbac-proxy/0.log" Jan 21 09:46:55 crc kubenswrapper[5113]: I0121 09:46:55.843945 5113 scope.go:117] "RemoveContainer" containerID="323c269f606c89f54b9cd8eea875f977746d25c6a01c2f3a0d008ec5a4e7b9bf" Jan 21 09:46:55 crc kubenswrapper[5113]: E0121 09:46:55.844666 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 09:47:00 crc kubenswrapper[5113]: I0121 09:47:00.771301 5113 scope.go:117] "RemoveContainer" containerID="88c32a68ed2974af68a5820922528c9963e3c2e5daa1ab4f784c7b71e6c622dd" Jan 21 09:47:10 crc kubenswrapper[5113]: I0121 09:47:10.858325 5113 scope.go:117] "RemoveContainer" containerID="323c269f606c89f54b9cd8eea875f977746d25c6a01c2f3a0d008ec5a4e7b9bf" Jan 21 09:47:10 crc kubenswrapper[5113]: E0121 09:47:10.859900 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 09:47:23 crc kubenswrapper[5113]: I0121 09:47:23.844115 5113 scope.go:117] "RemoveContainer" containerID="323c269f606c89f54b9cd8eea875f977746d25c6a01c2f3a0d008ec5a4e7b9bf" Jan 21 09:47:23 crc kubenswrapper[5113]: E0121 09:47:23.847227 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 09:47:37 crc kubenswrapper[5113]: I0121 09:47:37.844415 5113 scope.go:117] "RemoveContainer" containerID="323c269f606c89f54b9cd8eea875f977746d25c6a01c2f3a0d008ec5a4e7b9bf" Jan 21 09:47:37 crc kubenswrapper[5113]: 
E0121 09:47:37.845476 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 09:47:49 crc kubenswrapper[5113]: I0121 09:47:49.843936 5113 scope.go:117] "RemoveContainer" containerID="323c269f606c89f54b9cd8eea875f977746d25c6a01c2f3a0d008ec5a4e7b9bf" Jan 21 09:47:49 crc kubenswrapper[5113]: E0121 09:47:49.844992 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 09:47:51 crc kubenswrapper[5113]: I0121 09:47:51.635426 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vcw7s_11da35cd-b282-4537-ac8f-b6c86b18c21f/kube-multus/0.log" Jan 21 09:47:51 crc kubenswrapper[5113]: I0121 09:47:51.656047 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 09:47:51 crc kubenswrapper[5113]: I0121 09:47:51.662135 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vcw7s_11da35cd-b282-4537-ac8f-b6c86b18c21f/kube-multus/0.log" Jan 21 09:47:51 crc kubenswrapper[5113]: I0121 09:47:51.676204 5113 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 09:48:00 crc kubenswrapper[5113]: I0121 09:48:00.163400 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483148-bggxm"] Jan 21 09:48:00 crc kubenswrapper[5113]: I0121 09:48:00.167491 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e17b1571-e260-4edb-b107-3ae3ac7357e2" containerName="oc" Jan 21 09:48:00 crc kubenswrapper[5113]: I0121 09:48:00.167791 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="e17b1571-e260-4edb-b107-3ae3ac7357e2" containerName="oc" Jan 21 09:48:00 crc kubenswrapper[5113]: I0121 09:48:00.168255 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="e17b1571-e260-4edb-b107-3ae3ac7357e2" containerName="oc" Jan 21 09:48:00 crc kubenswrapper[5113]: I0121 09:48:00.185090 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483148-bggxm" Jan 21 09:48:00 crc kubenswrapper[5113]: I0121 09:48:00.186140 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483148-bggxm"] Jan 21 09:48:00 crc kubenswrapper[5113]: I0121 09:48:00.187234 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 09:48:00 crc kubenswrapper[5113]: I0121 09:48:00.190338 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 09:48:00 crc kubenswrapper[5113]: I0121 09:48:00.190718 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-mmrr5\"" Jan 21 09:48:00 crc kubenswrapper[5113]: I0121 09:48:00.316704 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2jcg\" (UniqueName: \"kubernetes.io/projected/b434b8bd-02e6-4c4a-aff6-e8f529de05ff-kube-api-access-v2jcg\") pod \"auto-csr-approver-29483148-bggxm\" (UID: \"b434b8bd-02e6-4c4a-aff6-e8f529de05ff\") " pod="openshift-infra/auto-csr-approver-29483148-bggxm" Jan 21 09:48:00 crc kubenswrapper[5113]: I0121 09:48:00.418377 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v2jcg\" (UniqueName: \"kubernetes.io/projected/b434b8bd-02e6-4c4a-aff6-e8f529de05ff-kube-api-access-v2jcg\") pod \"auto-csr-approver-29483148-bggxm\" (UID: \"b434b8bd-02e6-4c4a-aff6-e8f529de05ff\") " pod="openshift-infra/auto-csr-approver-29483148-bggxm" Jan 21 09:48:00 crc kubenswrapper[5113]: I0121 09:48:00.443030 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2jcg\" (UniqueName: \"kubernetes.io/projected/b434b8bd-02e6-4c4a-aff6-e8f529de05ff-kube-api-access-v2jcg\") pod \"auto-csr-approver-29483148-bggxm\" (UID: 
\"b434b8bd-02e6-4c4a-aff6-e8f529de05ff\") " pod="openshift-infra/auto-csr-approver-29483148-bggxm" Jan 21 09:48:00 crc kubenswrapper[5113]: I0121 09:48:00.512757 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483148-bggxm" Jan 21 09:48:00 crc kubenswrapper[5113]: I0121 09:48:00.735075 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483148-bggxm"] Jan 21 09:48:00 crc kubenswrapper[5113]: W0121 09:48:00.744335 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb434b8bd_02e6_4c4a_aff6_e8f529de05ff.slice/crio-c7a48c8adcf08fc0c194a8f5437c15eb177221b9e5b1900e822c10d1b7adf387 WatchSource:0}: Error finding container c7a48c8adcf08fc0c194a8f5437c15eb177221b9e5b1900e822c10d1b7adf387: Status 404 returned error can't find the container with id c7a48c8adcf08fc0c194a8f5437c15eb177221b9e5b1900e822c10d1b7adf387 Jan 21 09:48:00 crc kubenswrapper[5113]: I0121 09:48:00.915851 5113 scope.go:117] "RemoveContainer" containerID="7fb07137f23a9ba79263366b90606063c83123256577286ab26217358bae6085" Jan 21 09:48:00 crc kubenswrapper[5113]: I0121 09:48:00.938459 5113 scope.go:117] "RemoveContainer" containerID="bb1546b4f233804ec3cb1fd95230dafcdcdabfd10f4fb494efe4227b4d5fa7dc" Jan 21 09:48:00 crc kubenswrapper[5113]: I0121 09:48:00.965518 5113 scope.go:117] "RemoveContainer" containerID="32ca0d0f2d72f9fd39164778adb2946591f0c9d18cd6acb642f76167e54dc83b" Jan 21 09:48:01 crc kubenswrapper[5113]: I0121 09:48:01.577478 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483148-bggxm" event={"ID":"b434b8bd-02e6-4c4a-aff6-e8f529de05ff","Type":"ContainerStarted","Data":"c7a48c8adcf08fc0c194a8f5437c15eb177221b9e5b1900e822c10d1b7adf387"} Jan 21 09:48:01 crc kubenswrapper[5113]: I0121 09:48:01.844206 5113 scope.go:117] "RemoveContainer" 
containerID="323c269f606c89f54b9cd8eea875f977746d25c6a01c2f3a0d008ec5a4e7b9bf" Jan 21 09:48:01 crc kubenswrapper[5113]: E0121 09:48:01.845124 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 09:48:02 crc kubenswrapper[5113]: I0121 09:48:02.588782 5113 generic.go:358] "Generic (PLEG): container finished" podID="b434b8bd-02e6-4c4a-aff6-e8f529de05ff" containerID="a3d09dbe8d42bf5b6b2fce9d29bdd36482604f5e13c9fbe5c8259d8de384ff1e" exitCode=0 Jan 21 09:48:02 crc kubenswrapper[5113]: I0121 09:48:02.589012 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483148-bggxm" event={"ID":"b434b8bd-02e6-4c4a-aff6-e8f529de05ff","Type":"ContainerDied","Data":"a3d09dbe8d42bf5b6b2fce9d29bdd36482604f5e13c9fbe5c8259d8de384ff1e"} Jan 21 09:48:03 crc kubenswrapper[5113]: I0121 09:48:03.880362 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483148-bggxm" Jan 21 09:48:03 crc kubenswrapper[5113]: I0121 09:48:03.975404 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2jcg\" (UniqueName: \"kubernetes.io/projected/b434b8bd-02e6-4c4a-aff6-e8f529de05ff-kube-api-access-v2jcg\") pod \"b434b8bd-02e6-4c4a-aff6-e8f529de05ff\" (UID: \"b434b8bd-02e6-4c4a-aff6-e8f529de05ff\") " Jan 21 09:48:03 crc kubenswrapper[5113]: I0121 09:48:03.983634 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b434b8bd-02e6-4c4a-aff6-e8f529de05ff-kube-api-access-v2jcg" (OuterVolumeSpecName: "kube-api-access-v2jcg") pod "b434b8bd-02e6-4c4a-aff6-e8f529de05ff" (UID: "b434b8bd-02e6-4c4a-aff6-e8f529de05ff"). InnerVolumeSpecName "kube-api-access-v2jcg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:48:04 crc kubenswrapper[5113]: I0121 09:48:04.078010 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v2jcg\" (UniqueName: \"kubernetes.io/projected/b434b8bd-02e6-4c4a-aff6-e8f529de05ff-kube-api-access-v2jcg\") on node \"crc\" DevicePath \"\"" Jan 21 09:48:04 crc kubenswrapper[5113]: I0121 09:48:04.605388 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483148-bggxm" event={"ID":"b434b8bd-02e6-4c4a-aff6-e8f529de05ff","Type":"ContainerDied","Data":"c7a48c8adcf08fc0c194a8f5437c15eb177221b9e5b1900e822c10d1b7adf387"} Jan 21 09:48:04 crc kubenswrapper[5113]: I0121 09:48:04.605458 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7a48c8adcf08fc0c194a8f5437c15eb177221b9e5b1900e822c10d1b7adf387" Jan 21 09:48:04 crc kubenswrapper[5113]: I0121 09:48:04.605414 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483148-bggxm" Jan 21 09:48:04 crc kubenswrapper[5113]: I0121 09:48:04.969768 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483142-5qvl9"] Jan 21 09:48:04 crc kubenswrapper[5113]: I0121 09:48:04.980693 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483142-5qvl9"] Jan 21 09:48:06 crc kubenswrapper[5113]: I0121 09:48:06.857644 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c331cef-9fee-4b59-b161-92a5cecbf022" path="/var/lib/kubelet/pods/3c331cef-9fee-4b59-b161-92a5cecbf022/volumes" Jan 21 09:48:14 crc kubenswrapper[5113]: I0121 09:48:14.843964 5113 scope.go:117] "RemoveContainer" containerID="323c269f606c89f54b9cd8eea875f977746d25c6a01c2f3a0d008ec5a4e7b9bf" Jan 21 09:48:14 crc kubenswrapper[5113]: E0121 09:48:14.845284 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 09:48:26 crc kubenswrapper[5113]: I0121 09:48:26.844551 5113 scope.go:117] "RemoveContainer" containerID="323c269f606c89f54b9cd8eea875f977746d25c6a01c2f3a0d008ec5a4e7b9bf" Jan 21 09:48:26 crc kubenswrapper[5113]: E0121 09:48:26.846203 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" 
podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 09:48:40 crc kubenswrapper[5113]: I0121 09:48:40.851500 5113 scope.go:117] "RemoveContainer" containerID="323c269f606c89f54b9cd8eea875f977746d25c6a01c2f3a0d008ec5a4e7b9bf" Jan 21 09:48:41 crc kubenswrapper[5113]: I0121 09:48:41.997349 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" event={"ID":"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd","Type":"ContainerStarted","Data":"afea96e7d7be0a9ce1037a9bf6c1433e3d2f7057dec31c29f9092fc74f1ea96e"} Jan 21 09:49:01 crc kubenswrapper[5113]: I0121 09:49:01.040914 5113 scope.go:117] "RemoveContainer" containerID="0d6cc7ae66b3c4785b9e49b4a55e79bf5a2d53a6283d2a5b43974320e1586976" Jan 21 09:50:00 crc kubenswrapper[5113]: I0121 09:50:00.146178 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483150-zzq47"] Jan 21 09:50:00 crc kubenswrapper[5113]: I0121 09:50:00.148424 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b434b8bd-02e6-4c4a-aff6-e8f529de05ff" containerName="oc" Jan 21 09:50:00 crc kubenswrapper[5113]: I0121 09:50:00.148468 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="b434b8bd-02e6-4c4a-aff6-e8f529de05ff" containerName="oc" Jan 21 09:50:00 crc kubenswrapper[5113]: I0121 09:50:00.148669 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="b434b8bd-02e6-4c4a-aff6-e8f529de05ff" containerName="oc" Jan 21 09:50:00 crc kubenswrapper[5113]: I0121 09:50:00.169716 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483150-zzq47"] Jan 21 09:50:00 crc kubenswrapper[5113]: I0121 09:50:00.169952 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483150-zzq47" Jan 21 09:50:00 crc kubenswrapper[5113]: I0121 09:50:00.173515 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 09:50:00 crc kubenswrapper[5113]: I0121 09:50:00.174716 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 09:50:00 crc kubenswrapper[5113]: I0121 09:50:00.174971 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-mmrr5\"" Jan 21 09:50:00 crc kubenswrapper[5113]: I0121 09:50:00.214684 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4vhb\" (UniqueName: \"kubernetes.io/projected/bac63cd0-23a9-4405-a824-ac30e6ca8192-kube-api-access-h4vhb\") pod \"auto-csr-approver-29483150-zzq47\" (UID: \"bac63cd0-23a9-4405-a824-ac30e6ca8192\") " pod="openshift-infra/auto-csr-approver-29483150-zzq47" Jan 21 09:50:00 crc kubenswrapper[5113]: I0121 09:50:00.316372 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h4vhb\" (UniqueName: \"kubernetes.io/projected/bac63cd0-23a9-4405-a824-ac30e6ca8192-kube-api-access-h4vhb\") pod \"auto-csr-approver-29483150-zzq47\" (UID: \"bac63cd0-23a9-4405-a824-ac30e6ca8192\") " pod="openshift-infra/auto-csr-approver-29483150-zzq47" Jan 21 09:50:00 crc kubenswrapper[5113]: I0121 09:50:00.343805 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4vhb\" (UniqueName: \"kubernetes.io/projected/bac63cd0-23a9-4405-a824-ac30e6ca8192-kube-api-access-h4vhb\") pod \"auto-csr-approver-29483150-zzq47\" (UID: \"bac63cd0-23a9-4405-a824-ac30e6ca8192\") " pod="openshift-infra/auto-csr-approver-29483150-zzq47" Jan 21 09:50:00 crc kubenswrapper[5113]: I0121 09:50:00.503178 5113 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483150-zzq47" Jan 21 09:50:00 crc kubenswrapper[5113]: I0121 09:50:00.963140 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483150-zzq47"] Jan 21 09:50:00 crc kubenswrapper[5113]: W0121 09:50:00.976368 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbac63cd0_23a9_4405_a824_ac30e6ca8192.slice/crio-a003ae2c5951ae4dc1d1a8c056f4bce51d9e7a53544c66a4ea675b838af89933 WatchSource:0}: Error finding container a003ae2c5951ae4dc1d1a8c056f4bce51d9e7a53544c66a4ea675b838af89933: Status 404 returned error can't find the container with id a003ae2c5951ae4dc1d1a8c056f4bce51d9e7a53544c66a4ea675b838af89933 Jan 21 09:50:00 crc kubenswrapper[5113]: I0121 09:50:00.978627 5113 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 09:50:01 crc kubenswrapper[5113]: I0121 09:50:01.792640 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483150-zzq47" event={"ID":"bac63cd0-23a9-4405-a824-ac30e6ca8192","Type":"ContainerStarted","Data":"a003ae2c5951ae4dc1d1a8c056f4bce51d9e7a53544c66a4ea675b838af89933"} Jan 21 09:50:02 crc kubenswrapper[5113]: I0121 09:50:02.806682 5113 generic.go:358] "Generic (PLEG): container finished" podID="bac63cd0-23a9-4405-a824-ac30e6ca8192" containerID="40e0ff9ef8810f222043a364e172129782229aa013bace6f05119ed7f7f22a8c" exitCode=0 Jan 21 09:50:02 crc kubenswrapper[5113]: I0121 09:50:02.806798 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483150-zzq47" event={"ID":"bac63cd0-23a9-4405-a824-ac30e6ca8192","Type":"ContainerDied","Data":"40e0ff9ef8810f222043a364e172129782229aa013bace6f05119ed7f7f22a8c"} Jan 21 09:50:04 crc kubenswrapper[5113]: I0121 09:50:04.204397 5113 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483150-zzq47" Jan 21 09:50:04 crc kubenswrapper[5113]: I0121 09:50:04.301705 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4vhb\" (UniqueName: \"kubernetes.io/projected/bac63cd0-23a9-4405-a824-ac30e6ca8192-kube-api-access-h4vhb\") pod \"bac63cd0-23a9-4405-a824-ac30e6ca8192\" (UID: \"bac63cd0-23a9-4405-a824-ac30e6ca8192\") " Jan 21 09:50:04 crc kubenswrapper[5113]: I0121 09:50:04.313242 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bac63cd0-23a9-4405-a824-ac30e6ca8192-kube-api-access-h4vhb" (OuterVolumeSpecName: "kube-api-access-h4vhb") pod "bac63cd0-23a9-4405-a824-ac30e6ca8192" (UID: "bac63cd0-23a9-4405-a824-ac30e6ca8192"). InnerVolumeSpecName "kube-api-access-h4vhb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:50:04 crc kubenswrapper[5113]: I0121 09:50:04.404266 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h4vhb\" (UniqueName: \"kubernetes.io/projected/bac63cd0-23a9-4405-a824-ac30e6ca8192-kube-api-access-h4vhb\") on node \"crc\" DevicePath \"\"" Jan 21 09:50:04 crc kubenswrapper[5113]: I0121 09:50:04.828431 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483150-zzq47" Jan 21 09:50:04 crc kubenswrapper[5113]: I0121 09:50:04.828433 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483150-zzq47" event={"ID":"bac63cd0-23a9-4405-a824-ac30e6ca8192","Type":"ContainerDied","Data":"a003ae2c5951ae4dc1d1a8c056f4bce51d9e7a53544c66a4ea675b838af89933"} Jan 21 09:50:04 crc kubenswrapper[5113]: I0121 09:50:04.828978 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a003ae2c5951ae4dc1d1a8c056f4bce51d9e7a53544c66a4ea675b838af89933" Jan 21 09:50:05 crc kubenswrapper[5113]: I0121 09:50:05.293381 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483144-l5frf"] Jan 21 09:50:05 crc kubenswrapper[5113]: I0121 09:50:05.304005 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483144-l5frf"] Jan 21 09:50:06 crc kubenswrapper[5113]: I0121 09:50:06.865116 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef99b817-efc5-44b9-91c3-f4eaddc83ee5" path="/var/lib/kubelet/pods/ef99b817-efc5-44b9-91c3-f4eaddc83ee5/volumes" Jan 21 09:50:41 crc kubenswrapper[5113]: I0121 09:50:41.151895 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z2rds"] Jan 21 09:50:41 crc kubenswrapper[5113]: I0121 09:50:41.153323 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bac63cd0-23a9-4405-a824-ac30e6ca8192" containerName="oc" Jan 21 09:50:41 crc kubenswrapper[5113]: I0121 09:50:41.153339 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="bac63cd0-23a9-4405-a824-ac30e6ca8192" containerName="oc" Jan 21 09:50:41 crc kubenswrapper[5113]: I0121 09:50:41.153486 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="bac63cd0-23a9-4405-a824-ac30e6ca8192" containerName="oc" Jan 21 09:50:41 crc 
kubenswrapper[5113]: I0121 09:50:41.174716 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z2rds" Jan 21 09:50:41 crc kubenswrapper[5113]: I0121 09:50:41.184572 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z2rds"] Jan 21 09:50:41 crc kubenswrapper[5113]: I0121 09:50:41.268061 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkzvg\" (UniqueName: \"kubernetes.io/projected/03afba7c-30b2-42bb-a89e-48e8ca87d82f-kube-api-access-bkzvg\") pod \"community-operators-z2rds\" (UID: \"03afba7c-30b2-42bb-a89e-48e8ca87d82f\") " pod="openshift-marketplace/community-operators-z2rds" Jan 21 09:50:41 crc kubenswrapper[5113]: I0121 09:50:41.268296 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03afba7c-30b2-42bb-a89e-48e8ca87d82f-utilities\") pod \"community-operators-z2rds\" (UID: \"03afba7c-30b2-42bb-a89e-48e8ca87d82f\") " pod="openshift-marketplace/community-operators-z2rds" Jan 21 09:50:41 crc kubenswrapper[5113]: I0121 09:50:41.268482 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03afba7c-30b2-42bb-a89e-48e8ca87d82f-catalog-content\") pod \"community-operators-z2rds\" (UID: \"03afba7c-30b2-42bb-a89e-48e8ca87d82f\") " pod="openshift-marketplace/community-operators-z2rds" Jan 21 09:50:41 crc kubenswrapper[5113]: I0121 09:50:41.369464 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bkzvg\" (UniqueName: \"kubernetes.io/projected/03afba7c-30b2-42bb-a89e-48e8ca87d82f-kube-api-access-bkzvg\") pod \"community-operators-z2rds\" (UID: \"03afba7c-30b2-42bb-a89e-48e8ca87d82f\") " 
pod="openshift-marketplace/community-operators-z2rds" Jan 21 09:50:41 crc kubenswrapper[5113]: I0121 09:50:41.369534 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03afba7c-30b2-42bb-a89e-48e8ca87d82f-utilities\") pod \"community-operators-z2rds\" (UID: \"03afba7c-30b2-42bb-a89e-48e8ca87d82f\") " pod="openshift-marketplace/community-operators-z2rds" Jan 21 09:50:41 crc kubenswrapper[5113]: I0121 09:50:41.369697 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03afba7c-30b2-42bb-a89e-48e8ca87d82f-catalog-content\") pod \"community-operators-z2rds\" (UID: \"03afba7c-30b2-42bb-a89e-48e8ca87d82f\") " pod="openshift-marketplace/community-operators-z2rds" Jan 21 09:50:41 crc kubenswrapper[5113]: I0121 09:50:41.370572 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03afba7c-30b2-42bb-a89e-48e8ca87d82f-utilities\") pod \"community-operators-z2rds\" (UID: \"03afba7c-30b2-42bb-a89e-48e8ca87d82f\") " pod="openshift-marketplace/community-operators-z2rds" Jan 21 09:50:41 crc kubenswrapper[5113]: I0121 09:50:41.370608 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03afba7c-30b2-42bb-a89e-48e8ca87d82f-catalog-content\") pod \"community-operators-z2rds\" (UID: \"03afba7c-30b2-42bb-a89e-48e8ca87d82f\") " pod="openshift-marketplace/community-operators-z2rds" Jan 21 09:50:41 crc kubenswrapper[5113]: I0121 09:50:41.401617 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkzvg\" (UniqueName: \"kubernetes.io/projected/03afba7c-30b2-42bb-a89e-48e8ca87d82f-kube-api-access-bkzvg\") pod \"community-operators-z2rds\" (UID: \"03afba7c-30b2-42bb-a89e-48e8ca87d82f\") " 
pod="openshift-marketplace/community-operators-z2rds" Jan 21 09:50:41 crc kubenswrapper[5113]: I0121 09:50:41.505359 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z2rds" Jan 21 09:50:42 crc kubenswrapper[5113]: W0121 09:50:42.011096 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03afba7c_30b2_42bb_a89e_48e8ca87d82f.slice/crio-a69e4e1b858bd7179a0cb65ab4b11b4faaa0ac475d0068ae81fe382e01b984d7 WatchSource:0}: Error finding container a69e4e1b858bd7179a0cb65ab4b11b4faaa0ac475d0068ae81fe382e01b984d7: Status 404 returned error can't find the container with id a69e4e1b858bd7179a0cb65ab4b11b4faaa0ac475d0068ae81fe382e01b984d7 Jan 21 09:50:42 crc kubenswrapper[5113]: I0121 09:50:42.018255 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z2rds"] Jan 21 09:50:42 crc kubenswrapper[5113]: I0121 09:50:42.203632 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z2rds" event={"ID":"03afba7c-30b2-42bb-a89e-48e8ca87d82f","Type":"ContainerStarted","Data":"a69e4e1b858bd7179a0cb65ab4b11b4faaa0ac475d0068ae81fe382e01b984d7"} Jan 21 09:50:43 crc kubenswrapper[5113]: I0121 09:50:43.218825 5113 generic.go:358] "Generic (PLEG): container finished" podID="03afba7c-30b2-42bb-a89e-48e8ca87d82f" containerID="b65915f10edb314142e4c65049b554c6345eb87bd99ac6f9459ad0271fa99c63" exitCode=0 Jan 21 09:50:43 crc kubenswrapper[5113]: I0121 09:50:43.218962 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z2rds" event={"ID":"03afba7c-30b2-42bb-a89e-48e8ca87d82f","Type":"ContainerDied","Data":"b65915f10edb314142e4c65049b554c6345eb87bd99ac6f9459ad0271fa99c63"} Jan 21 09:50:44 crc kubenswrapper[5113]: I0121 09:50:44.231964 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-z2rds" event={"ID":"03afba7c-30b2-42bb-a89e-48e8ca87d82f","Type":"ContainerStarted","Data":"0e62ee161ac4fc471f534dca3a0dc9ec407a885f12eb15b5851b1b150a2596bb"} Jan 21 09:50:45 crc kubenswrapper[5113]: I0121 09:50:45.252080 5113 generic.go:358] "Generic (PLEG): container finished" podID="03afba7c-30b2-42bb-a89e-48e8ca87d82f" containerID="0e62ee161ac4fc471f534dca3a0dc9ec407a885f12eb15b5851b1b150a2596bb" exitCode=0 Jan 21 09:50:45 crc kubenswrapper[5113]: I0121 09:50:45.252158 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z2rds" event={"ID":"03afba7c-30b2-42bb-a89e-48e8ca87d82f","Type":"ContainerDied","Data":"0e62ee161ac4fc471f534dca3a0dc9ec407a885f12eb15b5851b1b150a2596bb"} Jan 21 09:50:46 crc kubenswrapper[5113]: I0121 09:50:46.263572 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z2rds" event={"ID":"03afba7c-30b2-42bb-a89e-48e8ca87d82f","Type":"ContainerStarted","Data":"52b33cf434943deda9be11b5a61089fc169efaf220ade9b6fb3615b7d23a9994"} Jan 21 09:50:51 crc kubenswrapper[5113]: I0121 09:50:51.505980 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z2rds" Jan 21 09:50:51 crc kubenswrapper[5113]: I0121 09:50:51.506775 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-z2rds" Jan 21 09:50:51 crc kubenswrapper[5113]: I0121 09:50:51.584084 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z2rds" Jan 21 09:50:51 crc kubenswrapper[5113]: I0121 09:50:51.618273 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z2rds" podStartSLOduration=9.896484301 podStartE2EDuration="10.618250659s" podCreationTimestamp="2026-01-21 09:50:41 +0000 UTC" 
firstStartedPulling="2026-01-21 09:50:43.220470555 +0000 UTC m=+1972.721297634" lastFinishedPulling="2026-01-21 09:50:43.942236903 +0000 UTC m=+1973.443063992" observedRunningTime="2026-01-21 09:50:46.293710961 +0000 UTC m=+1975.794538050" watchObservedRunningTime="2026-01-21 09:50:51.618250659 +0000 UTC m=+1981.119077718" Jan 21 09:50:52 crc kubenswrapper[5113]: I0121 09:50:52.404914 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z2rds" Jan 21 09:50:52 crc kubenswrapper[5113]: I0121 09:50:52.468547 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z2rds"] Jan 21 09:50:54 crc kubenswrapper[5113]: I0121 09:50:54.342072 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z2rds" podUID="03afba7c-30b2-42bb-a89e-48e8ca87d82f" containerName="registry-server" containerID="cri-o://52b33cf434943deda9be11b5a61089fc169efaf220ade9b6fb3615b7d23a9994" gracePeriod=2 Jan 21 09:50:55 crc kubenswrapper[5113]: I0121 09:50:55.079575 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mqr2q"] Jan 21 09:50:55 crc kubenswrapper[5113]: I0121 09:50:55.088131 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mqr2q" Jan 21 09:50:55 crc kubenswrapper[5113]: I0121 09:50:55.095879 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mqr2q"] Jan 21 09:50:55 crc kubenswrapper[5113]: I0121 09:50:55.138373 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23936f32-dcca-49a4-820d-14575dce2354-utilities\") pod \"certified-operators-mqr2q\" (UID: \"23936f32-dcca-49a4-820d-14575dce2354\") " pod="openshift-marketplace/certified-operators-mqr2q" Jan 21 09:50:55 crc kubenswrapper[5113]: I0121 09:50:55.138663 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq5fg\" (UniqueName: \"kubernetes.io/projected/23936f32-dcca-49a4-820d-14575dce2354-kube-api-access-wq5fg\") pod \"certified-operators-mqr2q\" (UID: \"23936f32-dcca-49a4-820d-14575dce2354\") " pod="openshift-marketplace/certified-operators-mqr2q" Jan 21 09:50:55 crc kubenswrapper[5113]: I0121 09:50:55.138828 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23936f32-dcca-49a4-820d-14575dce2354-catalog-content\") pod \"certified-operators-mqr2q\" (UID: \"23936f32-dcca-49a4-820d-14575dce2354\") " pod="openshift-marketplace/certified-operators-mqr2q" Jan 21 09:50:55 crc kubenswrapper[5113]: I0121 09:50:55.240647 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23936f32-dcca-49a4-820d-14575dce2354-catalog-content\") pod \"certified-operators-mqr2q\" (UID: \"23936f32-dcca-49a4-820d-14575dce2354\") " pod="openshift-marketplace/certified-operators-mqr2q" Jan 21 09:50:55 crc kubenswrapper[5113]: I0121 09:50:55.240820 5113 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23936f32-dcca-49a4-820d-14575dce2354-utilities\") pod \"certified-operators-mqr2q\" (UID: \"23936f32-dcca-49a4-820d-14575dce2354\") " pod="openshift-marketplace/certified-operators-mqr2q" Jan 21 09:50:55 crc kubenswrapper[5113]: I0121 09:50:55.240880 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wq5fg\" (UniqueName: \"kubernetes.io/projected/23936f32-dcca-49a4-820d-14575dce2354-kube-api-access-wq5fg\") pod \"certified-operators-mqr2q\" (UID: \"23936f32-dcca-49a4-820d-14575dce2354\") " pod="openshift-marketplace/certified-operators-mqr2q" Jan 21 09:50:55 crc kubenswrapper[5113]: I0121 09:50:55.241290 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23936f32-dcca-49a4-820d-14575dce2354-catalog-content\") pod \"certified-operators-mqr2q\" (UID: \"23936f32-dcca-49a4-820d-14575dce2354\") " pod="openshift-marketplace/certified-operators-mqr2q" Jan 21 09:50:55 crc kubenswrapper[5113]: I0121 09:50:55.241590 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23936f32-dcca-49a4-820d-14575dce2354-utilities\") pod \"certified-operators-mqr2q\" (UID: \"23936f32-dcca-49a4-820d-14575dce2354\") " pod="openshift-marketplace/certified-operators-mqr2q" Jan 21 09:50:55 crc kubenswrapper[5113]: I0121 09:50:55.255695 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z2rds" Jan 21 09:50:55 crc kubenswrapper[5113]: I0121 09:50:55.260391 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq5fg\" (UniqueName: \"kubernetes.io/projected/23936f32-dcca-49a4-820d-14575dce2354-kube-api-access-wq5fg\") pod \"certified-operators-mqr2q\" (UID: \"23936f32-dcca-49a4-820d-14575dce2354\") " pod="openshift-marketplace/certified-operators-mqr2q" Jan 21 09:50:55 crc kubenswrapper[5113]: I0121 09:50:55.341356 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03afba7c-30b2-42bb-a89e-48e8ca87d82f-catalog-content\") pod \"03afba7c-30b2-42bb-a89e-48e8ca87d82f\" (UID: \"03afba7c-30b2-42bb-a89e-48e8ca87d82f\") " Jan 21 09:50:55 crc kubenswrapper[5113]: I0121 09:50:55.341454 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkzvg\" (UniqueName: \"kubernetes.io/projected/03afba7c-30b2-42bb-a89e-48e8ca87d82f-kube-api-access-bkzvg\") pod \"03afba7c-30b2-42bb-a89e-48e8ca87d82f\" (UID: \"03afba7c-30b2-42bb-a89e-48e8ca87d82f\") " Jan 21 09:50:55 crc kubenswrapper[5113]: I0121 09:50:55.341483 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03afba7c-30b2-42bb-a89e-48e8ca87d82f-utilities\") pod \"03afba7c-30b2-42bb-a89e-48e8ca87d82f\" (UID: \"03afba7c-30b2-42bb-a89e-48e8ca87d82f\") " Jan 21 09:50:55 crc kubenswrapper[5113]: I0121 09:50:55.342869 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03afba7c-30b2-42bb-a89e-48e8ca87d82f-utilities" (OuterVolumeSpecName: "utilities") pod "03afba7c-30b2-42bb-a89e-48e8ca87d82f" (UID: "03afba7c-30b2-42bb-a89e-48e8ca87d82f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:50:55 crc kubenswrapper[5113]: I0121 09:50:55.347912 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03afba7c-30b2-42bb-a89e-48e8ca87d82f-kube-api-access-bkzvg" (OuterVolumeSpecName: "kube-api-access-bkzvg") pod "03afba7c-30b2-42bb-a89e-48e8ca87d82f" (UID: "03afba7c-30b2-42bb-a89e-48e8ca87d82f"). InnerVolumeSpecName "kube-api-access-bkzvg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:50:55 crc kubenswrapper[5113]: I0121 09:50:55.352073 5113 generic.go:358] "Generic (PLEG): container finished" podID="03afba7c-30b2-42bb-a89e-48e8ca87d82f" containerID="52b33cf434943deda9be11b5a61089fc169efaf220ade9b6fb3615b7d23a9994" exitCode=0 Jan 21 09:50:55 crc kubenswrapper[5113]: I0121 09:50:55.352135 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z2rds" event={"ID":"03afba7c-30b2-42bb-a89e-48e8ca87d82f","Type":"ContainerDied","Data":"52b33cf434943deda9be11b5a61089fc169efaf220ade9b6fb3615b7d23a9994"} Jan 21 09:50:55 crc kubenswrapper[5113]: I0121 09:50:55.352161 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z2rds" event={"ID":"03afba7c-30b2-42bb-a89e-48e8ca87d82f","Type":"ContainerDied","Data":"a69e4e1b858bd7179a0cb65ab4b11b4faaa0ac475d0068ae81fe382e01b984d7"} Jan 21 09:50:55 crc kubenswrapper[5113]: I0121 09:50:55.352176 5113 scope.go:117] "RemoveContainer" containerID="52b33cf434943deda9be11b5a61089fc169efaf220ade9b6fb3615b7d23a9994" Jan 21 09:50:55 crc kubenswrapper[5113]: I0121 09:50:55.352318 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z2rds" Jan 21 09:50:55 crc kubenswrapper[5113]: I0121 09:50:55.377795 5113 scope.go:117] "RemoveContainer" containerID="0e62ee161ac4fc471f534dca3a0dc9ec407a885f12eb15b5851b1b150a2596bb" Jan 21 09:50:55 crc kubenswrapper[5113]: I0121 09:50:55.401645 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03afba7c-30b2-42bb-a89e-48e8ca87d82f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "03afba7c-30b2-42bb-a89e-48e8ca87d82f" (UID: "03afba7c-30b2-42bb-a89e-48e8ca87d82f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:50:55 crc kubenswrapper[5113]: I0121 09:50:55.401975 5113 scope.go:117] "RemoveContainer" containerID="b65915f10edb314142e4c65049b554c6345eb87bd99ac6f9459ad0271fa99c63" Jan 21 09:50:55 crc kubenswrapper[5113]: I0121 09:50:55.413011 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mqr2q" Jan 21 09:50:55 crc kubenswrapper[5113]: I0121 09:50:55.424067 5113 scope.go:117] "RemoveContainer" containerID="52b33cf434943deda9be11b5a61089fc169efaf220ade9b6fb3615b7d23a9994" Jan 21 09:50:55 crc kubenswrapper[5113]: E0121 09:50:55.424488 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52b33cf434943deda9be11b5a61089fc169efaf220ade9b6fb3615b7d23a9994\": container with ID starting with 52b33cf434943deda9be11b5a61089fc169efaf220ade9b6fb3615b7d23a9994 not found: ID does not exist" containerID="52b33cf434943deda9be11b5a61089fc169efaf220ade9b6fb3615b7d23a9994" Jan 21 09:50:55 crc kubenswrapper[5113]: I0121 09:50:55.424520 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52b33cf434943deda9be11b5a61089fc169efaf220ade9b6fb3615b7d23a9994"} err="failed to get container status 
\"52b33cf434943deda9be11b5a61089fc169efaf220ade9b6fb3615b7d23a9994\": rpc error: code = NotFound desc = could not find container \"52b33cf434943deda9be11b5a61089fc169efaf220ade9b6fb3615b7d23a9994\": container with ID starting with 52b33cf434943deda9be11b5a61089fc169efaf220ade9b6fb3615b7d23a9994 not found: ID does not exist" Jan 21 09:50:55 crc kubenswrapper[5113]: I0121 09:50:55.424537 5113 scope.go:117] "RemoveContainer" containerID="0e62ee161ac4fc471f534dca3a0dc9ec407a885f12eb15b5851b1b150a2596bb" Jan 21 09:50:55 crc kubenswrapper[5113]: E0121 09:50:55.424956 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e62ee161ac4fc471f534dca3a0dc9ec407a885f12eb15b5851b1b150a2596bb\": container with ID starting with 0e62ee161ac4fc471f534dca3a0dc9ec407a885f12eb15b5851b1b150a2596bb not found: ID does not exist" containerID="0e62ee161ac4fc471f534dca3a0dc9ec407a885f12eb15b5851b1b150a2596bb" Jan 21 09:50:55 crc kubenswrapper[5113]: I0121 09:50:55.424997 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e62ee161ac4fc471f534dca3a0dc9ec407a885f12eb15b5851b1b150a2596bb"} err="failed to get container status \"0e62ee161ac4fc471f534dca3a0dc9ec407a885f12eb15b5851b1b150a2596bb\": rpc error: code = NotFound desc = could not find container \"0e62ee161ac4fc471f534dca3a0dc9ec407a885f12eb15b5851b1b150a2596bb\": container with ID starting with 0e62ee161ac4fc471f534dca3a0dc9ec407a885f12eb15b5851b1b150a2596bb not found: ID does not exist" Jan 21 09:50:55 crc kubenswrapper[5113]: I0121 09:50:55.425025 5113 scope.go:117] "RemoveContainer" containerID="b65915f10edb314142e4c65049b554c6345eb87bd99ac6f9459ad0271fa99c63" Jan 21 09:50:55 crc kubenswrapper[5113]: E0121 09:50:55.425241 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b65915f10edb314142e4c65049b554c6345eb87bd99ac6f9459ad0271fa99c63\": container with ID starting with b65915f10edb314142e4c65049b554c6345eb87bd99ac6f9459ad0271fa99c63 not found: ID does not exist" containerID="b65915f10edb314142e4c65049b554c6345eb87bd99ac6f9459ad0271fa99c63" Jan 21 09:50:55 crc kubenswrapper[5113]: I0121 09:50:55.425262 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b65915f10edb314142e4c65049b554c6345eb87bd99ac6f9459ad0271fa99c63"} err="failed to get container status \"b65915f10edb314142e4c65049b554c6345eb87bd99ac6f9459ad0271fa99c63\": rpc error: code = NotFound desc = could not find container \"b65915f10edb314142e4c65049b554c6345eb87bd99ac6f9459ad0271fa99c63\": container with ID starting with b65915f10edb314142e4c65049b554c6345eb87bd99ac6f9459ad0271fa99c63 not found: ID does not exist" Jan 21 09:50:55 crc kubenswrapper[5113]: I0121 09:50:55.443457 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03afba7c-30b2-42bb-a89e-48e8ca87d82f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 09:50:55 crc kubenswrapper[5113]: I0121 09:50:55.443493 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bkzvg\" (UniqueName: \"kubernetes.io/projected/03afba7c-30b2-42bb-a89e-48e8ca87d82f-kube-api-access-bkzvg\") on node \"crc\" DevicePath \"\"" Jan 21 09:50:55 crc kubenswrapper[5113]: I0121 09:50:55.443507 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03afba7c-30b2-42bb-a89e-48e8ca87d82f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 09:50:55 crc kubenswrapper[5113]: I0121 09:50:55.687390 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z2rds"] Jan 21 09:50:55 crc kubenswrapper[5113]: I0121 09:50:55.697884 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/community-operators-z2rds"] Jan 21 09:50:55 crc kubenswrapper[5113]: I0121 09:50:55.718983 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mqr2q"] Jan 21 09:50:55 crc kubenswrapper[5113]: W0121 09:50:55.723127 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23936f32_dcca_49a4_820d_14575dce2354.slice/crio-24505be012e17883c186b309f7451263a8c17926ad5c73bdadee26ee1cfa68dc WatchSource:0}: Error finding container 24505be012e17883c186b309f7451263a8c17926ad5c73bdadee26ee1cfa68dc: Status 404 returned error can't find the container with id 24505be012e17883c186b309f7451263a8c17926ad5c73bdadee26ee1cfa68dc Jan 21 09:50:56 crc kubenswrapper[5113]: I0121 09:50:56.366263 5113 generic.go:358] "Generic (PLEG): container finished" podID="23936f32-dcca-49a4-820d-14575dce2354" containerID="eaea0eb161034a2d3ef97c6fd99d6d9d01ca2676fcbdafbf1f05e12e4b33b33d" exitCode=0 Jan 21 09:50:56 crc kubenswrapper[5113]: I0121 09:50:56.366345 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mqr2q" event={"ID":"23936f32-dcca-49a4-820d-14575dce2354","Type":"ContainerDied","Data":"eaea0eb161034a2d3ef97c6fd99d6d9d01ca2676fcbdafbf1f05e12e4b33b33d"} Jan 21 09:50:56 crc kubenswrapper[5113]: I0121 09:50:56.366703 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mqr2q" event={"ID":"23936f32-dcca-49a4-820d-14575dce2354","Type":"ContainerStarted","Data":"24505be012e17883c186b309f7451263a8c17926ad5c73bdadee26ee1cfa68dc"} Jan 21 09:50:56 crc kubenswrapper[5113]: I0121 09:50:56.856066 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03afba7c-30b2-42bb-a89e-48e8ca87d82f" path="/var/lib/kubelet/pods/03afba7c-30b2-42bb-a89e-48e8ca87d82f/volumes" Jan 21 09:50:57 crc kubenswrapper[5113]: I0121 09:50:57.379646 5113 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mqr2q" event={"ID":"23936f32-dcca-49a4-820d-14575dce2354","Type":"ContainerStarted","Data":"57682b02df883cfb8446ae53a4b8581e276b25c07a9493b1a86f6415dbf0af67"} Jan 21 09:50:58 crc kubenswrapper[5113]: I0121 09:50:58.339842 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 09:50:58 crc kubenswrapper[5113]: I0121 09:50:58.339914 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 09:50:58 crc kubenswrapper[5113]: I0121 09:50:58.391456 5113 generic.go:358] "Generic (PLEG): container finished" podID="23936f32-dcca-49a4-820d-14575dce2354" containerID="57682b02df883cfb8446ae53a4b8581e276b25c07a9493b1a86f6415dbf0af67" exitCode=0 Jan 21 09:50:58 crc kubenswrapper[5113]: I0121 09:50:58.391686 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mqr2q" event={"ID":"23936f32-dcca-49a4-820d-14575dce2354","Type":"ContainerDied","Data":"57682b02df883cfb8446ae53a4b8581e276b25c07a9493b1a86f6415dbf0af67"} Jan 21 09:50:59 crc kubenswrapper[5113]: I0121 09:50:59.404413 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mqr2q" event={"ID":"23936f32-dcca-49a4-820d-14575dce2354","Type":"ContainerStarted","Data":"cde502c08e96ce88251f9fd308eb352d5626c38cc74926ca7faf7c337f7e0697"} Jan 21 09:50:59 crc kubenswrapper[5113]: I0121 09:50:59.443315 5113 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mqr2q" podStartSLOduration=3.753301219 podStartE2EDuration="4.443291754s" podCreationTimestamp="2026-01-21 09:50:55 +0000 UTC" firstStartedPulling="2026-01-21 09:50:56.367616249 +0000 UTC m=+1985.868443338" lastFinishedPulling="2026-01-21 09:50:57.057606794 +0000 UTC m=+1986.558433873" observedRunningTime="2026-01-21 09:50:59.430514995 +0000 UTC m=+1988.931342074" watchObservedRunningTime="2026-01-21 09:50:59.443291754 +0000 UTC m=+1988.944118843" Jan 21 09:51:01 crc kubenswrapper[5113]: I0121 09:51:01.209971 5113 scope.go:117] "RemoveContainer" containerID="e5f8f1135b8459a6a467ffdd569f2a2f3fc6b039c5d43e53d4150ccfad5cb9ea" Jan 21 09:51:05 crc kubenswrapper[5113]: I0121 09:51:05.414397 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mqr2q" Jan 21 09:51:05 crc kubenswrapper[5113]: I0121 09:51:05.414961 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-mqr2q" Jan 21 09:51:05 crc kubenswrapper[5113]: I0121 09:51:05.491181 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mqr2q" Jan 21 09:51:05 crc kubenswrapper[5113]: I0121 09:51:05.570456 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mqr2q" Jan 21 09:51:05 crc kubenswrapper[5113]: I0121 09:51:05.732109 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mqr2q"] Jan 21 09:51:07 crc kubenswrapper[5113]: I0121 09:51:07.477241 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mqr2q" podUID="23936f32-dcca-49a4-820d-14575dce2354" containerName="registry-server" 
containerID="cri-o://cde502c08e96ce88251f9fd308eb352d5626c38cc74926ca7faf7c337f7e0697" gracePeriod=2 Jan 21 09:51:08 crc kubenswrapper[5113]: I0121 09:51:08.425985 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mqr2q" Jan 21 09:51:08 crc kubenswrapper[5113]: I0121 09:51:08.480890 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23936f32-dcca-49a4-820d-14575dce2354-catalog-content\") pod \"23936f32-dcca-49a4-820d-14575dce2354\" (UID: \"23936f32-dcca-49a4-820d-14575dce2354\") " Jan 21 09:51:08 crc kubenswrapper[5113]: I0121 09:51:08.481059 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23936f32-dcca-49a4-820d-14575dce2354-utilities\") pod \"23936f32-dcca-49a4-820d-14575dce2354\" (UID: \"23936f32-dcca-49a4-820d-14575dce2354\") " Jan 21 09:51:08 crc kubenswrapper[5113]: I0121 09:51:08.481136 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wq5fg\" (UniqueName: \"kubernetes.io/projected/23936f32-dcca-49a4-820d-14575dce2354-kube-api-access-wq5fg\") pod \"23936f32-dcca-49a4-820d-14575dce2354\" (UID: \"23936f32-dcca-49a4-820d-14575dce2354\") " Jan 21 09:51:08 crc kubenswrapper[5113]: I0121 09:51:08.482453 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23936f32-dcca-49a4-820d-14575dce2354-utilities" (OuterVolumeSpecName: "utilities") pod "23936f32-dcca-49a4-820d-14575dce2354" (UID: "23936f32-dcca-49a4-820d-14575dce2354"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:51:08 crc kubenswrapper[5113]: I0121 09:51:08.488131 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23936f32-dcca-49a4-820d-14575dce2354-kube-api-access-wq5fg" (OuterVolumeSpecName: "kube-api-access-wq5fg") pod "23936f32-dcca-49a4-820d-14575dce2354" (UID: "23936f32-dcca-49a4-820d-14575dce2354"). InnerVolumeSpecName "kube-api-access-wq5fg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:51:08 crc kubenswrapper[5113]: I0121 09:51:08.488803 5113 generic.go:358] "Generic (PLEG): container finished" podID="23936f32-dcca-49a4-820d-14575dce2354" containerID="cde502c08e96ce88251f9fd308eb352d5626c38cc74926ca7faf7c337f7e0697" exitCode=0 Jan 21 09:51:08 crc kubenswrapper[5113]: I0121 09:51:08.489020 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mqr2q" event={"ID":"23936f32-dcca-49a4-820d-14575dce2354","Type":"ContainerDied","Data":"cde502c08e96ce88251f9fd308eb352d5626c38cc74926ca7faf7c337f7e0697"} Jan 21 09:51:08 crc kubenswrapper[5113]: I0121 09:51:08.489057 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mqr2q" event={"ID":"23936f32-dcca-49a4-820d-14575dce2354","Type":"ContainerDied","Data":"24505be012e17883c186b309f7451263a8c17926ad5c73bdadee26ee1cfa68dc"} Jan 21 09:51:08 crc kubenswrapper[5113]: I0121 09:51:08.489079 5113 scope.go:117] "RemoveContainer" containerID="cde502c08e96ce88251f9fd308eb352d5626c38cc74926ca7faf7c337f7e0697" Jan 21 09:51:08 crc kubenswrapper[5113]: I0121 09:51:08.489239 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mqr2q" Jan 21 09:51:08 crc kubenswrapper[5113]: I0121 09:51:08.521000 5113 scope.go:117] "RemoveContainer" containerID="57682b02df883cfb8446ae53a4b8581e276b25c07a9493b1a86f6415dbf0af67" Jan 21 09:51:08 crc kubenswrapper[5113]: I0121 09:51:08.531622 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23936f32-dcca-49a4-820d-14575dce2354-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "23936f32-dcca-49a4-820d-14575dce2354" (UID: "23936f32-dcca-49a4-820d-14575dce2354"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 09:51:08 crc kubenswrapper[5113]: I0121 09:51:08.545404 5113 scope.go:117] "RemoveContainer" containerID="eaea0eb161034a2d3ef97c6fd99d6d9d01ca2676fcbdafbf1f05e12e4b33b33d" Jan 21 09:51:08 crc kubenswrapper[5113]: I0121 09:51:08.570070 5113 scope.go:117] "RemoveContainer" containerID="cde502c08e96ce88251f9fd308eb352d5626c38cc74926ca7faf7c337f7e0697" Jan 21 09:51:08 crc kubenswrapper[5113]: E0121 09:51:08.570894 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cde502c08e96ce88251f9fd308eb352d5626c38cc74926ca7faf7c337f7e0697\": container with ID starting with cde502c08e96ce88251f9fd308eb352d5626c38cc74926ca7faf7c337f7e0697 not found: ID does not exist" containerID="cde502c08e96ce88251f9fd308eb352d5626c38cc74926ca7faf7c337f7e0697" Jan 21 09:51:08 crc kubenswrapper[5113]: I0121 09:51:08.570931 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cde502c08e96ce88251f9fd308eb352d5626c38cc74926ca7faf7c337f7e0697"} err="failed to get container status \"cde502c08e96ce88251f9fd308eb352d5626c38cc74926ca7faf7c337f7e0697\": rpc error: code = NotFound desc = could not find container \"cde502c08e96ce88251f9fd308eb352d5626c38cc74926ca7faf7c337f7e0697\": 
container with ID starting with cde502c08e96ce88251f9fd308eb352d5626c38cc74926ca7faf7c337f7e0697 not found: ID does not exist" Jan 21 09:51:08 crc kubenswrapper[5113]: I0121 09:51:08.570952 5113 scope.go:117] "RemoveContainer" containerID="57682b02df883cfb8446ae53a4b8581e276b25c07a9493b1a86f6415dbf0af67" Jan 21 09:51:08 crc kubenswrapper[5113]: E0121 09:51:08.571229 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57682b02df883cfb8446ae53a4b8581e276b25c07a9493b1a86f6415dbf0af67\": container with ID starting with 57682b02df883cfb8446ae53a4b8581e276b25c07a9493b1a86f6415dbf0af67 not found: ID does not exist" containerID="57682b02df883cfb8446ae53a4b8581e276b25c07a9493b1a86f6415dbf0af67" Jan 21 09:51:08 crc kubenswrapper[5113]: I0121 09:51:08.571259 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57682b02df883cfb8446ae53a4b8581e276b25c07a9493b1a86f6415dbf0af67"} err="failed to get container status \"57682b02df883cfb8446ae53a4b8581e276b25c07a9493b1a86f6415dbf0af67\": rpc error: code = NotFound desc = could not find container \"57682b02df883cfb8446ae53a4b8581e276b25c07a9493b1a86f6415dbf0af67\": container with ID starting with 57682b02df883cfb8446ae53a4b8581e276b25c07a9493b1a86f6415dbf0af67 not found: ID does not exist" Jan 21 09:51:08 crc kubenswrapper[5113]: I0121 09:51:08.571276 5113 scope.go:117] "RemoveContainer" containerID="eaea0eb161034a2d3ef97c6fd99d6d9d01ca2676fcbdafbf1f05e12e4b33b33d" Jan 21 09:51:08 crc kubenswrapper[5113]: E0121 09:51:08.571639 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaea0eb161034a2d3ef97c6fd99d6d9d01ca2676fcbdafbf1f05e12e4b33b33d\": container with ID starting with eaea0eb161034a2d3ef97c6fd99d6d9d01ca2676fcbdafbf1f05e12e4b33b33d not found: ID does not exist" 
containerID="eaea0eb161034a2d3ef97c6fd99d6d9d01ca2676fcbdafbf1f05e12e4b33b33d" Jan 21 09:51:08 crc kubenswrapper[5113]: I0121 09:51:08.571667 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaea0eb161034a2d3ef97c6fd99d6d9d01ca2676fcbdafbf1f05e12e4b33b33d"} err="failed to get container status \"eaea0eb161034a2d3ef97c6fd99d6d9d01ca2676fcbdafbf1f05e12e4b33b33d\": rpc error: code = NotFound desc = could not find container \"eaea0eb161034a2d3ef97c6fd99d6d9d01ca2676fcbdafbf1f05e12e4b33b33d\": container with ID starting with eaea0eb161034a2d3ef97c6fd99d6d9d01ca2676fcbdafbf1f05e12e4b33b33d not found: ID does not exist" Jan 21 09:51:08 crc kubenswrapper[5113]: I0121 09:51:08.582438 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23936f32-dcca-49a4-820d-14575dce2354-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 09:51:08 crc kubenswrapper[5113]: I0121 09:51:08.582460 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23936f32-dcca-49a4-820d-14575dce2354-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 09:51:08 crc kubenswrapper[5113]: I0121 09:51:08.582470 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wq5fg\" (UniqueName: \"kubernetes.io/projected/23936f32-dcca-49a4-820d-14575dce2354-kube-api-access-wq5fg\") on node \"crc\" DevicePath \"\"" Jan 21 09:51:08 crc kubenswrapper[5113]: I0121 09:51:08.848073 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mqr2q"] Jan 21 09:51:08 crc kubenswrapper[5113]: I0121 09:51:08.863402 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mqr2q"] Jan 21 09:51:10 crc kubenswrapper[5113]: I0121 09:51:10.859858 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="23936f32-dcca-49a4-820d-14575dce2354" path="/var/lib/kubelet/pods/23936f32-dcca-49a4-820d-14575dce2354/volumes" Jan 21 09:51:28 crc kubenswrapper[5113]: I0121 09:51:28.340419 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 09:51:28 crc kubenswrapper[5113]: I0121 09:51:28.341193 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 09:51:58 crc kubenswrapper[5113]: I0121 09:51:58.339793 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 09:51:58 crc kubenswrapper[5113]: I0121 09:51:58.340507 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 09:51:58 crc kubenswrapper[5113]: I0121 09:51:58.340581 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" Jan 21 09:51:58 crc kubenswrapper[5113]: I0121 09:51:58.341391 5113 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"afea96e7d7be0a9ce1037a9bf6c1433e3d2f7057dec31c29f9092fc74f1ea96e"} pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 09:51:58 crc kubenswrapper[5113]: I0121 09:51:58.341481 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" containerID="cri-o://afea96e7d7be0a9ce1037a9bf6c1433e3d2f7057dec31c29f9092fc74f1ea96e" gracePeriod=600 Jan 21 09:51:59 crc kubenswrapper[5113]: I0121 09:51:59.034445 5113 generic.go:358] "Generic (PLEG): container finished" podID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerID="afea96e7d7be0a9ce1037a9bf6c1433e3d2f7057dec31c29f9092fc74f1ea96e" exitCode=0 Jan 21 09:51:59 crc kubenswrapper[5113]: I0121 09:51:59.034531 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" event={"ID":"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd","Type":"ContainerDied","Data":"afea96e7d7be0a9ce1037a9bf6c1433e3d2f7057dec31c29f9092fc74f1ea96e"} Jan 21 09:51:59 crc kubenswrapper[5113]: I0121 09:51:59.034939 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" event={"ID":"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd","Type":"ContainerStarted","Data":"ee5539020b573801c63958627ee7726587f769091e02d7a0cbaac2ca07a86658"} Jan 21 09:51:59 crc kubenswrapper[5113]: I0121 09:51:59.034962 5113 scope.go:117] "RemoveContainer" containerID="323c269f606c89f54b9cd8eea875f977746d25c6a01c2f3a0d008ec5a4e7b9bf" Jan 21 09:52:00 crc kubenswrapper[5113]: I0121 09:52:00.061429 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dx7gz"] Jan 21 09:52:00 crc kubenswrapper[5113]: I0121 09:52:00.063165 
5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="03afba7c-30b2-42bb-a89e-48e8ca87d82f" containerName="extract-utilities" Jan 21 09:52:00 crc kubenswrapper[5113]: I0121 09:52:00.063192 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="03afba7c-30b2-42bb-a89e-48e8ca87d82f" containerName="extract-utilities" Jan 21 09:52:00 crc kubenswrapper[5113]: I0121 09:52:00.063222 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="03afba7c-30b2-42bb-a89e-48e8ca87d82f" containerName="registry-server" Jan 21 09:52:00 crc kubenswrapper[5113]: I0121 09:52:00.063236 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="03afba7c-30b2-42bb-a89e-48e8ca87d82f" containerName="registry-server" Jan 21 09:52:00 crc kubenswrapper[5113]: I0121 09:52:00.063274 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="23936f32-dcca-49a4-820d-14575dce2354" containerName="extract-utilities" Jan 21 09:52:00 crc kubenswrapper[5113]: I0121 09:52:00.063288 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="23936f32-dcca-49a4-820d-14575dce2354" containerName="extract-utilities" Jan 21 09:52:00 crc kubenswrapper[5113]: I0121 09:52:00.063323 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="23936f32-dcca-49a4-820d-14575dce2354" containerName="extract-content" Jan 21 09:52:00 crc kubenswrapper[5113]: I0121 09:52:00.063336 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="23936f32-dcca-49a4-820d-14575dce2354" containerName="extract-content" Jan 21 09:52:00 crc kubenswrapper[5113]: I0121 09:52:00.063379 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="03afba7c-30b2-42bb-a89e-48e8ca87d82f" containerName="extract-content" Jan 21 09:52:00 crc kubenswrapper[5113]: I0121 09:52:00.063392 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="03afba7c-30b2-42bb-a89e-48e8ca87d82f" containerName="extract-content" 
Jan 21 09:52:00 crc kubenswrapper[5113]: I0121 09:52:00.063413 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="23936f32-dcca-49a4-820d-14575dce2354" containerName="registry-server" Jan 21 09:52:00 crc kubenswrapper[5113]: I0121 09:52:00.063425 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="23936f32-dcca-49a4-820d-14575dce2354" containerName="registry-server" Jan 21 09:52:00 crc kubenswrapper[5113]: I0121 09:52:00.063638 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="03afba7c-30b2-42bb-a89e-48e8ca87d82f" containerName="registry-server" Jan 21 09:52:00 crc kubenswrapper[5113]: I0121 09:52:00.063670 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="23936f32-dcca-49a4-820d-14575dce2354" containerName="registry-server" Jan 21 09:52:00 crc kubenswrapper[5113]: I0121 09:52:00.070885 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dx7gz" Jan 21 09:52:00 crc kubenswrapper[5113]: I0121 09:52:00.086704 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dx7gz"] Jan 21 09:52:00 crc kubenswrapper[5113]: I0121 09:52:00.133109 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq5lx\" (UniqueName: \"kubernetes.io/projected/3d720e05-22f6-4a99-a3c8-10f4cbfe3da2-kube-api-access-bq5lx\") pod \"redhat-operators-dx7gz\" (UID: \"3d720e05-22f6-4a99-a3c8-10f4cbfe3da2\") " pod="openshift-marketplace/redhat-operators-dx7gz" Jan 21 09:52:00 crc kubenswrapper[5113]: I0121 09:52:00.133189 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d720e05-22f6-4a99-a3c8-10f4cbfe3da2-catalog-content\") pod \"redhat-operators-dx7gz\" (UID: \"3d720e05-22f6-4a99-a3c8-10f4cbfe3da2\") " 
pod="openshift-marketplace/redhat-operators-dx7gz" Jan 21 09:52:00 crc kubenswrapper[5113]: I0121 09:52:00.133286 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d720e05-22f6-4a99-a3c8-10f4cbfe3da2-utilities\") pod \"redhat-operators-dx7gz\" (UID: \"3d720e05-22f6-4a99-a3c8-10f4cbfe3da2\") " pod="openshift-marketplace/redhat-operators-dx7gz" Jan 21 09:52:00 crc kubenswrapper[5113]: I0121 09:52:00.150409 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483152-nwxhr"] Jan 21 09:52:00 crc kubenswrapper[5113]: I0121 09:52:00.162673 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483152-nwxhr" Jan 21 09:52:00 crc kubenswrapper[5113]: I0121 09:52:00.175631 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-mmrr5\"" Jan 21 09:52:00 crc kubenswrapper[5113]: I0121 09:52:00.176326 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 09:52:00 crc kubenswrapper[5113]: I0121 09:52:00.179624 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483152-nwxhr"] Jan 21 09:52:00 crc kubenswrapper[5113]: I0121 09:52:00.180109 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 09:52:00 crc kubenswrapper[5113]: I0121 09:52:00.235568 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bq5lx\" (UniqueName: \"kubernetes.io/projected/3d720e05-22f6-4a99-a3c8-10f4cbfe3da2-kube-api-access-bq5lx\") pod \"redhat-operators-dx7gz\" (UID: \"3d720e05-22f6-4a99-a3c8-10f4cbfe3da2\") " pod="openshift-marketplace/redhat-operators-dx7gz" Jan 21 09:52:00 crc 
kubenswrapper[5113]: I0121 09:52:00.235622 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmzhk\" (UniqueName: \"kubernetes.io/projected/66d59865-2ed1-4301-8297-aeaff69d829b-kube-api-access-nmzhk\") pod \"auto-csr-approver-29483152-nwxhr\" (UID: \"66d59865-2ed1-4301-8297-aeaff69d829b\") " pod="openshift-infra/auto-csr-approver-29483152-nwxhr"
Jan 21 09:52:00 crc kubenswrapper[5113]: I0121 09:52:00.235648 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d720e05-22f6-4a99-a3c8-10f4cbfe3da2-catalog-content\") pod \"redhat-operators-dx7gz\" (UID: \"3d720e05-22f6-4a99-a3c8-10f4cbfe3da2\") " pod="openshift-marketplace/redhat-operators-dx7gz"
Jan 21 09:52:00 crc kubenswrapper[5113]: I0121 09:52:00.235688 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d720e05-22f6-4a99-a3c8-10f4cbfe3da2-utilities\") pod \"redhat-operators-dx7gz\" (UID: \"3d720e05-22f6-4a99-a3c8-10f4cbfe3da2\") " pod="openshift-marketplace/redhat-operators-dx7gz"
Jan 21 09:52:00 crc kubenswrapper[5113]: I0121 09:52:00.236160 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d720e05-22f6-4a99-a3c8-10f4cbfe3da2-utilities\") pod \"redhat-operators-dx7gz\" (UID: \"3d720e05-22f6-4a99-a3c8-10f4cbfe3da2\") " pod="openshift-marketplace/redhat-operators-dx7gz"
Jan 21 09:52:00 crc kubenswrapper[5113]: I0121 09:52:00.237079 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d720e05-22f6-4a99-a3c8-10f4cbfe3da2-catalog-content\") pod \"redhat-operators-dx7gz\" (UID: \"3d720e05-22f6-4a99-a3c8-10f4cbfe3da2\") " pod="openshift-marketplace/redhat-operators-dx7gz"
Jan 21 09:52:00 crc kubenswrapper[5113]: I0121 09:52:00.258887 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq5lx\" (UniqueName: \"kubernetes.io/projected/3d720e05-22f6-4a99-a3c8-10f4cbfe3da2-kube-api-access-bq5lx\") pod \"redhat-operators-dx7gz\" (UID: \"3d720e05-22f6-4a99-a3c8-10f4cbfe3da2\") " pod="openshift-marketplace/redhat-operators-dx7gz"
Jan 21 09:52:00 crc kubenswrapper[5113]: I0121 09:52:00.336489 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nmzhk\" (UniqueName: \"kubernetes.io/projected/66d59865-2ed1-4301-8297-aeaff69d829b-kube-api-access-nmzhk\") pod \"auto-csr-approver-29483152-nwxhr\" (UID: \"66d59865-2ed1-4301-8297-aeaff69d829b\") " pod="openshift-infra/auto-csr-approver-29483152-nwxhr"
Jan 21 09:52:00 crc kubenswrapper[5113]: I0121 09:52:00.357665 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmzhk\" (UniqueName: \"kubernetes.io/projected/66d59865-2ed1-4301-8297-aeaff69d829b-kube-api-access-nmzhk\") pod \"auto-csr-approver-29483152-nwxhr\" (UID: \"66d59865-2ed1-4301-8297-aeaff69d829b\") " pod="openshift-infra/auto-csr-approver-29483152-nwxhr"
Jan 21 09:52:00 crc kubenswrapper[5113]: I0121 09:52:00.394627 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dx7gz"
Jan 21 09:52:00 crc kubenswrapper[5113]: I0121 09:52:00.492522 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483152-nwxhr"
Jan 21 09:52:00 crc kubenswrapper[5113]: I0121 09:52:00.639039 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dx7gz"]
Jan 21 09:52:00 crc kubenswrapper[5113]: I0121 09:52:00.721613 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483152-nwxhr"]
Jan 21 09:52:00 crc kubenswrapper[5113]: W0121 09:52:00.730297 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66d59865_2ed1_4301_8297_aeaff69d829b.slice/crio-0c8260b9f11089a502cd65995e81e899671928abcd224d7d641e3317e0b59cbd WatchSource:0}: Error finding container 0c8260b9f11089a502cd65995e81e899671928abcd224d7d641e3317e0b59cbd: Status 404 returned error can't find the container with id 0c8260b9f11089a502cd65995e81e899671928abcd224d7d641e3317e0b59cbd
Jan 21 09:52:01 crc kubenswrapper[5113]: I0121 09:52:01.060993 5113 generic.go:358] "Generic (PLEG): container finished" podID="3d720e05-22f6-4a99-a3c8-10f4cbfe3da2" containerID="8c96dd5cfcbf342f5d4582cbab8de7983605f4afaf88bba4970fa62330ee85a6" exitCode=0
Jan 21 09:52:01 crc kubenswrapper[5113]: I0121 09:52:01.061106 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dx7gz" event={"ID":"3d720e05-22f6-4a99-a3c8-10f4cbfe3da2","Type":"ContainerDied","Data":"8c96dd5cfcbf342f5d4582cbab8de7983605f4afaf88bba4970fa62330ee85a6"}
Jan 21 09:52:01 crc kubenswrapper[5113]: I0121 09:52:01.061468 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dx7gz" event={"ID":"3d720e05-22f6-4a99-a3c8-10f4cbfe3da2","Type":"ContainerStarted","Data":"046f1a3f36e4d3166f592be6a238875b48d052f1c7614774d48bf0fdddcf7ec5"}
Jan 21 09:52:01 crc kubenswrapper[5113]: I0121 09:52:01.064196 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483152-nwxhr" event={"ID":"66d59865-2ed1-4301-8297-aeaff69d829b","Type":"ContainerStarted","Data":"0c8260b9f11089a502cd65995e81e899671928abcd224d7d641e3317e0b59cbd"}
Jan 21 09:52:02 crc kubenswrapper[5113]: I0121 09:52:02.078864 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dx7gz" event={"ID":"3d720e05-22f6-4a99-a3c8-10f4cbfe3da2","Type":"ContainerStarted","Data":"bae6f5c0432633cca1acfc23e396754775037a4d8113503c6cc2ee4c6fcbacb2"}
Jan 21 09:52:02 crc kubenswrapper[5113]: I0121 09:52:02.080467 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483152-nwxhr" event={"ID":"66d59865-2ed1-4301-8297-aeaff69d829b","Type":"ContainerStarted","Data":"db54d8da278cd766c4b5bc6ecebf204f86d3da5a1cf1a91b6be838785785917a"}
Jan 21 09:52:02 crc kubenswrapper[5113]: I0121 09:52:02.113952 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29483152-nwxhr" podStartSLOduration=1.241855224 podStartE2EDuration="2.113935728s" podCreationTimestamp="2026-01-21 09:52:00 +0000 UTC" firstStartedPulling="2026-01-21 09:52:00.7329549 +0000 UTC m=+2050.233781949" lastFinishedPulling="2026-01-21 09:52:01.605035404 +0000 UTC m=+2051.105862453" observedRunningTime="2026-01-21 09:52:02.113333601 +0000 UTC m=+2051.614160690" watchObservedRunningTime="2026-01-21 09:52:02.113935728 +0000 UTC m=+2051.614762777"
Jan 21 09:52:03 crc kubenswrapper[5113]: I0121 09:52:03.095278 5113 generic.go:358] "Generic (PLEG): container finished" podID="3d720e05-22f6-4a99-a3c8-10f4cbfe3da2" containerID="bae6f5c0432633cca1acfc23e396754775037a4d8113503c6cc2ee4c6fcbacb2" exitCode=0
Jan 21 09:52:03 crc kubenswrapper[5113]: I0121 09:52:03.095387 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dx7gz" event={"ID":"3d720e05-22f6-4a99-a3c8-10f4cbfe3da2","Type":"ContainerDied","Data":"bae6f5c0432633cca1acfc23e396754775037a4d8113503c6cc2ee4c6fcbacb2"}
Jan 21 09:52:03 crc kubenswrapper[5113]: I0121 09:52:03.098601 5113 generic.go:358] "Generic (PLEG): container finished" podID="66d59865-2ed1-4301-8297-aeaff69d829b" containerID="db54d8da278cd766c4b5bc6ecebf204f86d3da5a1cf1a91b6be838785785917a" exitCode=0
Jan 21 09:52:03 crc kubenswrapper[5113]: I0121 09:52:03.098896 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483152-nwxhr" event={"ID":"66d59865-2ed1-4301-8297-aeaff69d829b","Type":"ContainerDied","Data":"db54d8da278cd766c4b5bc6ecebf204f86d3da5a1cf1a91b6be838785785917a"}
Jan 21 09:52:04 crc kubenswrapper[5113]: I0121 09:52:04.110665 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dx7gz" event={"ID":"3d720e05-22f6-4a99-a3c8-10f4cbfe3da2","Type":"ContainerStarted","Data":"29220555a9e602727fcc8a939359ce33cc32f9ac919c02633331e67a38a65645"}
Jan 21 09:52:04 crc kubenswrapper[5113]: I0121 09:52:04.145141 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dx7gz" podStartSLOduration=3.393667691 podStartE2EDuration="4.145117933s" podCreationTimestamp="2026-01-21 09:52:00 +0000 UTC" firstStartedPulling="2026-01-21 09:52:01.061985309 +0000 UTC m=+2050.562812358" lastFinishedPulling="2026-01-21 09:52:01.813435541 +0000 UTC m=+2051.314262600" observedRunningTime="2026-01-21 09:52:04.139140335 +0000 UTC m=+2053.639967414" watchObservedRunningTime="2026-01-21 09:52:04.145117933 +0000 UTC m=+2053.645945012"
Jan 21 09:52:04 crc kubenswrapper[5113]: I0121 09:52:04.409414 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483152-nwxhr"
Jan 21 09:52:04 crc kubenswrapper[5113]: I0121 09:52:04.506425 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmzhk\" (UniqueName: \"kubernetes.io/projected/66d59865-2ed1-4301-8297-aeaff69d829b-kube-api-access-nmzhk\") pod \"66d59865-2ed1-4301-8297-aeaff69d829b\" (UID: \"66d59865-2ed1-4301-8297-aeaff69d829b\") "
Jan 21 09:52:04 crc kubenswrapper[5113]: I0121 09:52:04.512528 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66d59865-2ed1-4301-8297-aeaff69d829b-kube-api-access-nmzhk" (OuterVolumeSpecName: "kube-api-access-nmzhk") pod "66d59865-2ed1-4301-8297-aeaff69d829b" (UID: "66d59865-2ed1-4301-8297-aeaff69d829b"). InnerVolumeSpecName "kube-api-access-nmzhk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:52:04 crc kubenswrapper[5113]: I0121 09:52:04.609342 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmzhk\" (UniqueName: \"kubernetes.io/projected/66d59865-2ed1-4301-8297-aeaff69d829b-kube-api-access-nmzhk\") on node \"crc\" DevicePath \"\""
Jan 21 09:52:05 crc kubenswrapper[5113]: I0121 09:52:05.118722 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483152-nwxhr"
Jan 21 09:52:05 crc kubenswrapper[5113]: I0121 09:52:05.118835 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483152-nwxhr" event={"ID":"66d59865-2ed1-4301-8297-aeaff69d829b","Type":"ContainerDied","Data":"0c8260b9f11089a502cd65995e81e899671928abcd224d7d641e3317e0b59cbd"}
Jan 21 09:52:05 crc kubenswrapper[5113]: I0121 09:52:05.118869 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c8260b9f11089a502cd65995e81e899671928abcd224d7d641e3317e0b59cbd"
Jan 21 09:52:05 crc kubenswrapper[5113]: I0121 09:52:05.181643 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483146-xsgkz"]
Jan 21 09:52:05 crc kubenswrapper[5113]: I0121 09:52:05.187338 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483146-xsgkz"]
Jan 21 09:52:06 crc kubenswrapper[5113]: I0121 09:52:06.858576 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e17b1571-e260-4edb-b107-3ae3ac7357e2" path="/var/lib/kubelet/pods/e17b1571-e260-4edb-b107-3ae3ac7357e2/volumes"
Jan 21 09:52:10 crc kubenswrapper[5113]: I0121 09:52:10.395423 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dx7gz"
Jan 21 09:52:10 crc kubenswrapper[5113]: I0121 09:52:10.396046 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-dx7gz"
Jan 21 09:52:10 crc kubenswrapper[5113]: I0121 09:52:10.459690 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dx7gz"
Jan 21 09:52:11 crc kubenswrapper[5113]: I0121 09:52:11.266376 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dx7gz"
Jan 21 09:52:11 crc kubenswrapper[5113]: I0121 09:52:11.323271 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dx7gz"]
Jan 21 09:52:13 crc kubenswrapper[5113]: I0121 09:52:13.215400 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dx7gz" podUID="3d720e05-22f6-4a99-a3c8-10f4cbfe3da2" containerName="registry-server" containerID="cri-o://29220555a9e602727fcc8a939359ce33cc32f9ac919c02633331e67a38a65645" gracePeriod=2
Jan 21 09:52:15 crc kubenswrapper[5113]: I0121 09:52:15.240475 5113 generic.go:358] "Generic (PLEG): container finished" podID="3d720e05-22f6-4a99-a3c8-10f4cbfe3da2" containerID="29220555a9e602727fcc8a939359ce33cc32f9ac919c02633331e67a38a65645" exitCode=0
Jan 21 09:52:15 crc kubenswrapper[5113]: I0121 09:52:15.241191 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dx7gz" event={"ID":"3d720e05-22f6-4a99-a3c8-10f4cbfe3da2","Type":"ContainerDied","Data":"29220555a9e602727fcc8a939359ce33cc32f9ac919c02633331e67a38a65645"}
Jan 21 09:52:15 crc kubenswrapper[5113]: I0121 09:52:15.533253 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dx7gz"
Jan 21 09:52:15 crc kubenswrapper[5113]: I0121 09:52:15.620444 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d720e05-22f6-4a99-a3c8-10f4cbfe3da2-utilities\") pod \"3d720e05-22f6-4a99-a3c8-10f4cbfe3da2\" (UID: \"3d720e05-22f6-4a99-a3c8-10f4cbfe3da2\") "
Jan 21 09:52:15 crc kubenswrapper[5113]: I0121 09:52:15.622079 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d720e05-22f6-4a99-a3c8-10f4cbfe3da2-catalog-content\") pod \"3d720e05-22f6-4a99-a3c8-10f4cbfe3da2\" (UID: \"3d720e05-22f6-4a99-a3c8-10f4cbfe3da2\") "
Jan 21 09:52:15 crc kubenswrapper[5113]: I0121 09:52:15.621995 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d720e05-22f6-4a99-a3c8-10f4cbfe3da2-utilities" (OuterVolumeSpecName: "utilities") pod "3d720e05-22f6-4a99-a3c8-10f4cbfe3da2" (UID: "3d720e05-22f6-4a99-a3c8-10f4cbfe3da2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:52:15 crc kubenswrapper[5113]: I0121 09:52:15.622442 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bq5lx\" (UniqueName: \"kubernetes.io/projected/3d720e05-22f6-4a99-a3c8-10f4cbfe3da2-kube-api-access-bq5lx\") pod \"3d720e05-22f6-4a99-a3c8-10f4cbfe3da2\" (UID: \"3d720e05-22f6-4a99-a3c8-10f4cbfe3da2\") "
Jan 21 09:52:15 crc kubenswrapper[5113]: I0121 09:52:15.623192 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d720e05-22f6-4a99-a3c8-10f4cbfe3da2-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 09:52:15 crc kubenswrapper[5113]: I0121 09:52:15.633424 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d720e05-22f6-4a99-a3c8-10f4cbfe3da2-kube-api-access-bq5lx" (OuterVolumeSpecName: "kube-api-access-bq5lx") pod "3d720e05-22f6-4a99-a3c8-10f4cbfe3da2" (UID: "3d720e05-22f6-4a99-a3c8-10f4cbfe3da2"). InnerVolumeSpecName "kube-api-access-bq5lx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:52:15 crc kubenswrapper[5113]: I0121 09:52:15.724549 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bq5lx\" (UniqueName: \"kubernetes.io/projected/3d720e05-22f6-4a99-a3c8-10f4cbfe3da2-kube-api-access-bq5lx\") on node \"crc\" DevicePath \"\""
Jan 21 09:52:15 crc kubenswrapper[5113]: I0121 09:52:15.757003 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d720e05-22f6-4a99-a3c8-10f4cbfe3da2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d720e05-22f6-4a99-a3c8-10f4cbfe3da2" (UID: "3d720e05-22f6-4a99-a3c8-10f4cbfe3da2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 09:52:15 crc kubenswrapper[5113]: I0121 09:52:15.826420 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d720e05-22f6-4a99-a3c8-10f4cbfe3da2-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 09:52:16 crc kubenswrapper[5113]: I0121 09:52:16.249348 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dx7gz" event={"ID":"3d720e05-22f6-4a99-a3c8-10f4cbfe3da2","Type":"ContainerDied","Data":"046f1a3f36e4d3166f592be6a238875b48d052f1c7614774d48bf0fdddcf7ec5"}
Jan 21 09:52:16 crc kubenswrapper[5113]: I0121 09:52:16.249752 5113 scope.go:117] "RemoveContainer" containerID="29220555a9e602727fcc8a939359ce33cc32f9ac919c02633331e67a38a65645"
Jan 21 09:52:16 crc kubenswrapper[5113]: I0121 09:52:16.249935 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dx7gz"
Jan 21 09:52:16 crc kubenswrapper[5113]: I0121 09:52:16.280141 5113 scope.go:117] "RemoveContainer" containerID="bae6f5c0432633cca1acfc23e396754775037a4d8113503c6cc2ee4c6fcbacb2"
Jan 21 09:52:16 crc kubenswrapper[5113]: I0121 09:52:16.290281 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dx7gz"]
Jan 21 09:52:16 crc kubenswrapper[5113]: I0121 09:52:16.303257 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dx7gz"]
Jan 21 09:52:16 crc kubenswrapper[5113]: I0121 09:52:16.322631 5113 scope.go:117] "RemoveContainer" containerID="8c96dd5cfcbf342f5d4582cbab8de7983605f4afaf88bba4970fa62330ee85a6"
Jan 21 09:52:16 crc kubenswrapper[5113]: I0121 09:52:16.869837 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d720e05-22f6-4a99-a3c8-10f4cbfe3da2" path="/var/lib/kubelet/pods/3d720e05-22f6-4a99-a3c8-10f4cbfe3da2/volumes"
Jan 21 09:52:51 crc kubenswrapper[5113]: I0121 09:52:51.825346 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vcw7s_11da35cd-b282-4537-ac8f-b6c86b18c21f/kube-multus/0.log"
Jan 21 09:52:51 crc kubenswrapper[5113]: I0121 09:52:51.847218 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vcw7s_11da35cd-b282-4537-ac8f-b6c86b18c21f/kube-multus/0.log"
Jan 21 09:52:51 crc kubenswrapper[5113]: I0121 09:52:51.851633 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 21 09:52:51 crc kubenswrapper[5113]: I0121 09:52:51.859209 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 21 09:53:01 crc kubenswrapper[5113]: I0121 09:53:01.423997 5113 scope.go:117] "RemoveContainer" containerID="4f257d0ef81b2106ec94f9756024c20169dee6f7885515d978c95b7be345001c"
Jan 21 09:53:58 crc kubenswrapper[5113]: I0121 09:53:58.340558 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 09:53:58 crc kubenswrapper[5113]: I0121 09:53:58.341289 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 09:54:00 crc kubenswrapper[5113]: I0121 09:54:00.158793 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483154-6jr59"]
Jan 21 09:54:00 crc kubenswrapper[5113]: I0121 09:54:00.161031 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3d720e05-22f6-4a99-a3c8-10f4cbfe3da2" containerName="extract-utilities"
Jan 21 09:54:00 crc kubenswrapper[5113]: I0121 09:54:00.161068 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d720e05-22f6-4a99-a3c8-10f4cbfe3da2" containerName="extract-utilities"
Jan 21 09:54:00 crc kubenswrapper[5113]: I0121 09:54:00.161086 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3d720e05-22f6-4a99-a3c8-10f4cbfe3da2" containerName="extract-content"
Jan 21 09:54:00 crc kubenswrapper[5113]: I0121 09:54:00.161099 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d720e05-22f6-4a99-a3c8-10f4cbfe3da2" containerName="extract-content"
Jan 21 09:54:00 crc kubenswrapper[5113]: I0121 09:54:00.161131 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="66d59865-2ed1-4301-8297-aeaff69d829b" containerName="oc"
Jan 21 09:54:00 crc kubenswrapper[5113]: I0121 09:54:00.161145 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="66d59865-2ed1-4301-8297-aeaff69d829b" containerName="oc"
Jan 21 09:54:00 crc kubenswrapper[5113]: I0121 09:54:00.161204 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3d720e05-22f6-4a99-a3c8-10f4cbfe3da2" containerName="registry-server"
Jan 21 09:54:00 crc kubenswrapper[5113]: I0121 09:54:00.161216 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d720e05-22f6-4a99-a3c8-10f4cbfe3da2" containerName="registry-server"
Jan 21 09:54:00 crc kubenswrapper[5113]: I0121 09:54:00.161404 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="3d720e05-22f6-4a99-a3c8-10f4cbfe3da2" containerName="registry-server"
Jan 21 09:54:00 crc kubenswrapper[5113]: I0121 09:54:00.161440 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="66d59865-2ed1-4301-8297-aeaff69d829b" containerName="oc"
Jan 21 09:54:00 crc kubenswrapper[5113]: I0121 09:54:00.178824 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483154-6jr59"]
Jan 21 09:54:00 crc kubenswrapper[5113]: I0121 09:54:00.181644 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483154-6jr59"
Jan 21 09:54:00 crc kubenswrapper[5113]: I0121 09:54:00.185001 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 21 09:54:00 crc kubenswrapper[5113]: I0121 09:54:00.187324 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 21 09:54:00 crc kubenswrapper[5113]: I0121 09:54:00.189972 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-mmrr5\""
Jan 21 09:54:00 crc kubenswrapper[5113]: I0121 09:54:00.357127 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8hrq\" (UniqueName: \"kubernetes.io/projected/41e686df-fede-4589-a65e-8699310c39dd-kube-api-access-b8hrq\") pod \"auto-csr-approver-29483154-6jr59\" (UID: \"41e686df-fede-4589-a65e-8699310c39dd\") " pod="openshift-infra/auto-csr-approver-29483154-6jr59"
Jan 21 09:54:00 crc kubenswrapper[5113]: I0121 09:54:00.458941 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b8hrq\" (UniqueName: \"kubernetes.io/projected/41e686df-fede-4589-a65e-8699310c39dd-kube-api-access-b8hrq\") pod \"auto-csr-approver-29483154-6jr59\" (UID: \"41e686df-fede-4589-a65e-8699310c39dd\") " pod="openshift-infra/auto-csr-approver-29483154-6jr59"
Jan 21 09:54:00 crc kubenswrapper[5113]: I0121 09:54:00.490690 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8hrq\" (UniqueName: \"kubernetes.io/projected/41e686df-fede-4589-a65e-8699310c39dd-kube-api-access-b8hrq\") pod \"auto-csr-approver-29483154-6jr59\" (UID: \"41e686df-fede-4589-a65e-8699310c39dd\") " pod="openshift-infra/auto-csr-approver-29483154-6jr59"
Jan 21 09:54:00 crc kubenswrapper[5113]: I0121 09:54:00.518418 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483154-6jr59"
Jan 21 09:54:00 crc kubenswrapper[5113]: I0121 09:54:00.732945 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483154-6jr59"]
Jan 21 09:54:01 crc kubenswrapper[5113]: I0121 09:54:01.364783 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483154-6jr59" event={"ID":"41e686df-fede-4589-a65e-8699310c39dd","Type":"ContainerStarted","Data":"df4745bd49ae7d0140dd6c31509972850d08aa5a4436c69a95b19e65508429b9"}
Jan 21 09:54:03 crc kubenswrapper[5113]: I0121 09:54:03.387148 5113 generic.go:358] "Generic (PLEG): container finished" podID="41e686df-fede-4589-a65e-8699310c39dd" containerID="e097c79ed23da7c83aa3c2c72e0ae39c4a830aa4fc2d2374d9207b9103c3526e" exitCode=0
Jan 21 09:54:03 crc kubenswrapper[5113]: I0121 09:54:03.387356 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483154-6jr59" event={"ID":"41e686df-fede-4589-a65e-8699310c39dd","Type":"ContainerDied","Data":"e097c79ed23da7c83aa3c2c72e0ae39c4a830aa4fc2d2374d9207b9103c3526e"}
Jan 21 09:54:04 crc kubenswrapper[5113]: I0121 09:54:04.684834 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483154-6jr59"
Jan 21 09:54:04 crc kubenswrapper[5113]: I0121 09:54:04.738925 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8hrq\" (UniqueName: \"kubernetes.io/projected/41e686df-fede-4589-a65e-8699310c39dd-kube-api-access-b8hrq\") pod \"41e686df-fede-4589-a65e-8699310c39dd\" (UID: \"41e686df-fede-4589-a65e-8699310c39dd\") "
Jan 21 09:54:04 crc kubenswrapper[5113]: I0121 09:54:04.744763 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41e686df-fede-4589-a65e-8699310c39dd-kube-api-access-b8hrq" (OuterVolumeSpecName: "kube-api-access-b8hrq") pod "41e686df-fede-4589-a65e-8699310c39dd" (UID: "41e686df-fede-4589-a65e-8699310c39dd"). InnerVolumeSpecName "kube-api-access-b8hrq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 09:54:04 crc kubenswrapper[5113]: I0121 09:54:04.840067 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b8hrq\" (UniqueName: \"kubernetes.io/projected/41e686df-fede-4589-a65e-8699310c39dd-kube-api-access-b8hrq\") on node \"crc\" DevicePath \"\""
Jan 21 09:54:05 crc kubenswrapper[5113]: I0121 09:54:05.406729 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483154-6jr59" event={"ID":"41e686df-fede-4589-a65e-8699310c39dd","Type":"ContainerDied","Data":"df4745bd49ae7d0140dd6c31509972850d08aa5a4436c69a95b19e65508429b9"}
Jan 21 09:54:05 crc kubenswrapper[5113]: I0121 09:54:05.406822 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df4745bd49ae7d0140dd6c31509972850d08aa5a4436c69a95b19e65508429b9"
Jan 21 09:54:05 crc kubenswrapper[5113]: I0121 09:54:05.406775 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483154-6jr59"
Jan 21 09:54:05 crc kubenswrapper[5113]: I0121 09:54:05.769239 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483148-bggxm"]
Jan 21 09:54:05 crc kubenswrapper[5113]: I0121 09:54:05.783170 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483148-bggxm"]
Jan 21 09:54:06 crc kubenswrapper[5113]: I0121 09:54:06.857618 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b434b8bd-02e6-4c4a-aff6-e8f529de05ff" path="/var/lib/kubelet/pods/b434b8bd-02e6-4c4a-aff6-e8f529de05ff/volumes"
Jan 21 09:54:28 crc kubenswrapper[5113]: I0121 09:54:28.340306 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 09:54:28 crc kubenswrapper[5113]: I0121 09:54:28.341080 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 09:54:58 crc kubenswrapper[5113]: I0121 09:54:58.340320 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 09:54:58 crc kubenswrapper[5113]: I0121 09:54:58.343316 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 09:54:58 crc kubenswrapper[5113]: I0121 09:54:58.343768 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt"
Jan 21 09:54:58 crc kubenswrapper[5113]: I0121 09:54:58.345051 5113 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ee5539020b573801c63958627ee7726587f769091e02d7a0cbaac2ca07a86658"} pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 09:54:58 crc kubenswrapper[5113]: I0121 09:54:58.345342 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" containerID="cri-o://ee5539020b573801c63958627ee7726587f769091e02d7a0cbaac2ca07a86658" gracePeriod=600
Jan 21 09:54:58 crc kubenswrapper[5113]: E0121 09:54:58.484993 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd"
Jan 21 09:54:59 crc kubenswrapper[5113]: I0121 09:54:59.109315 5113 generic.go:358] "Generic (PLEG): container finished" podID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerID="ee5539020b573801c63958627ee7726587f769091e02d7a0cbaac2ca07a86658" exitCode=0
Jan 21 09:54:59 crc kubenswrapper[5113]: I0121 09:54:59.109431 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" event={"ID":"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd","Type":"ContainerDied","Data":"ee5539020b573801c63958627ee7726587f769091e02d7a0cbaac2ca07a86658"}
Jan 21 09:54:59 crc kubenswrapper[5113]: I0121 09:54:59.109503 5113 scope.go:117] "RemoveContainer" containerID="afea96e7d7be0a9ce1037a9bf6c1433e3d2f7057dec31c29f9092fc74f1ea96e"
Jan 21 09:54:59 crc kubenswrapper[5113]: I0121 09:54:59.110409 5113 scope.go:117] "RemoveContainer" containerID="ee5539020b573801c63958627ee7726587f769091e02d7a0cbaac2ca07a86658"
Jan 21 09:54:59 crc kubenswrapper[5113]: E0121 09:54:59.111037 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd"
Jan 21 09:55:01 crc kubenswrapper[5113]: I0121 09:55:01.611583 5113 scope.go:117] "RemoveContainer" containerID="a3d09dbe8d42bf5b6b2fce9d29bdd36482604f5e13c9fbe5c8259d8de384ff1e"
Jan 21 09:55:13 crc kubenswrapper[5113]: I0121 09:55:13.844935 5113 scope.go:117] "RemoveContainer" containerID="ee5539020b573801c63958627ee7726587f769091e02d7a0cbaac2ca07a86658"
Jan 21 09:55:13 crc kubenswrapper[5113]: E0121 09:55:13.846022 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd"
Jan 21 09:55:27 crc kubenswrapper[5113]: I0121 09:55:27.843518 5113 scope.go:117] "RemoveContainer" containerID="ee5539020b573801c63958627ee7726587f769091e02d7a0cbaac2ca07a86658"
Jan 21 09:55:27 crc kubenswrapper[5113]: E0121 09:55:27.844617 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd"
Jan 21 09:55:42 crc kubenswrapper[5113]: I0121 09:55:42.845115 5113 scope.go:117] "RemoveContainer" containerID="ee5539020b573801c63958627ee7726587f769091e02d7a0cbaac2ca07a86658"
Jan 21 09:55:42 crc kubenswrapper[5113]: E0121 09:55:42.846190 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd"
Jan 21 09:55:53 crc kubenswrapper[5113]: I0121 09:55:53.843328 5113 scope.go:117] "RemoveContainer" containerID="ee5539020b573801c63958627ee7726587f769091e02d7a0cbaac2ca07a86658"
Jan 21 09:55:53 crc kubenswrapper[5113]: E0121 09:55:53.844307 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd"
Jan 21 09:56:00 crc kubenswrapper[5113]: I0121 09:56:00.147983 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483156-2j2xm"]
Jan 21 09:56:00 crc kubenswrapper[5113]: I0121 09:56:00.150916 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="41e686df-fede-4589-a65e-8699310c39dd" containerName="oc"
Jan 21 09:56:00 crc kubenswrapper[5113]: I0121 09:56:00.150944 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="41e686df-fede-4589-a65e-8699310c39dd" containerName="oc"
Jan 21 09:56:00 crc kubenswrapper[5113]: I0121 09:56:00.151167 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="41e686df-fede-4589-a65e-8699310c39dd" containerName="oc"
Jan 21 09:56:00 crc kubenswrapper[5113]: I0121 09:56:00.165220 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483156-2j2xm"]
Jan 21 09:56:00 crc kubenswrapper[5113]: I0121 09:56:00.165392 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483156-2j2xm"
Jan 21 09:56:00 crc kubenswrapper[5113]: I0121 09:56:00.168537 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 21 09:56:00 crc kubenswrapper[5113]: I0121 09:56:00.168865 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-mmrr5\""
Jan 21 09:56:00 crc kubenswrapper[5113]: I0121 09:56:00.172123 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 21 09:56:00 crc kubenswrapper[5113]: I0121 09:56:00.232678 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zsvc\" (UniqueName: \"kubernetes.io/projected/706c9d74-2b42-4584-b005-ba8c14f26de0-kube-api-access-7zsvc\") pod \"auto-csr-approver-29483156-2j2xm\" (UID: \"706c9d74-2b42-4584-b005-ba8c14f26de0\") " pod="openshift-infra/auto-csr-approver-29483156-2j2xm"
Jan 21 09:56:00 crc kubenswrapper[5113]: I0121 09:56:00.333771 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7zsvc\" (UniqueName: \"kubernetes.io/projected/706c9d74-2b42-4584-b005-ba8c14f26de0-kube-api-access-7zsvc\") pod \"auto-csr-approver-29483156-2j2xm\" (UID: \"706c9d74-2b42-4584-b005-ba8c14f26de0\") " pod="openshift-infra/auto-csr-approver-29483156-2j2xm"
Jan 21 09:56:00 crc kubenswrapper[5113]: I0121 09:56:00.387489 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zsvc\" (UniqueName: \"kubernetes.io/projected/706c9d74-2b42-4584-b005-ba8c14f26de0-kube-api-access-7zsvc\") pod \"auto-csr-approver-29483156-2j2xm\" (UID: \"706c9d74-2b42-4584-b005-ba8c14f26de0\") " pod="openshift-infra/auto-csr-approver-29483156-2j2xm"
Jan 21 09:56:00 crc kubenswrapper[5113]: I0121 09:56:00.487769 5113 util.go:30]
"No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483156-2j2xm" Jan 21 09:56:00 crc kubenswrapper[5113]: I0121 09:56:00.837485 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483156-2j2xm"] Jan 21 09:56:00 crc kubenswrapper[5113]: W0121 09:56:00.875607 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod706c9d74_2b42_4584_b005_ba8c14f26de0.slice/crio-39c8a886494e3b751efdaed7af9ffe3629956cb45699bebed97400a0b4c120ff WatchSource:0}: Error finding container 39c8a886494e3b751efdaed7af9ffe3629956cb45699bebed97400a0b4c120ff: Status 404 returned error can't find the container with id 39c8a886494e3b751efdaed7af9ffe3629956cb45699bebed97400a0b4c120ff Jan 21 09:56:00 crc kubenswrapper[5113]: I0121 09:56:00.877493 5113 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 09:56:00 crc kubenswrapper[5113]: I0121 09:56:00.932700 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483156-2j2xm" event={"ID":"706c9d74-2b42-4584-b005-ba8c14f26de0","Type":"ContainerStarted","Data":"39c8a886494e3b751efdaed7af9ffe3629956cb45699bebed97400a0b4c120ff"} Jan 21 09:56:04 crc kubenswrapper[5113]: I0121 09:56:04.964501 5113 generic.go:358] "Generic (PLEG): container finished" podID="706c9d74-2b42-4584-b005-ba8c14f26de0" containerID="fef0c9e8bcfa9ab8da849b5df96ff7aea18143093c7bf251a681efd14d3ea0d6" exitCode=0 Jan 21 09:56:04 crc kubenswrapper[5113]: I0121 09:56:04.964649 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483156-2j2xm" event={"ID":"706c9d74-2b42-4584-b005-ba8c14f26de0","Type":"ContainerDied","Data":"fef0c9e8bcfa9ab8da849b5df96ff7aea18143093c7bf251a681efd14d3ea0d6"} Jan 21 09:56:06 crc kubenswrapper[5113]: I0121 09:56:06.298309 5113 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483156-2j2xm" Jan 21 09:56:06 crc kubenswrapper[5113]: I0121 09:56:06.438816 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zsvc\" (UniqueName: \"kubernetes.io/projected/706c9d74-2b42-4584-b005-ba8c14f26de0-kube-api-access-7zsvc\") pod \"706c9d74-2b42-4584-b005-ba8c14f26de0\" (UID: \"706c9d74-2b42-4584-b005-ba8c14f26de0\") " Jan 21 09:56:06 crc kubenswrapper[5113]: I0121 09:56:06.448108 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/706c9d74-2b42-4584-b005-ba8c14f26de0-kube-api-access-7zsvc" (OuterVolumeSpecName: "kube-api-access-7zsvc") pod "706c9d74-2b42-4584-b005-ba8c14f26de0" (UID: "706c9d74-2b42-4584-b005-ba8c14f26de0"). InnerVolumeSpecName "kube-api-access-7zsvc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:56:06 crc kubenswrapper[5113]: I0121 09:56:06.540781 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7zsvc\" (UniqueName: \"kubernetes.io/projected/706c9d74-2b42-4584-b005-ba8c14f26de0-kube-api-access-7zsvc\") on node \"crc\" DevicePath \"\"" Jan 21 09:56:06 crc kubenswrapper[5113]: I0121 09:56:06.842870 5113 scope.go:117] "RemoveContainer" containerID="ee5539020b573801c63958627ee7726587f769091e02d7a0cbaac2ca07a86658" Jan 21 09:56:06 crc kubenswrapper[5113]: E0121 09:56:06.843150 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 09:56:06 crc kubenswrapper[5113]: I0121 09:56:06.989000 5113 kubelet.go:2569] "SyncLoop (PLEG): event for 
pod" pod="openshift-infra/auto-csr-approver-29483156-2j2xm" event={"ID":"706c9d74-2b42-4584-b005-ba8c14f26de0","Type":"ContainerDied","Data":"39c8a886494e3b751efdaed7af9ffe3629956cb45699bebed97400a0b4c120ff"} Jan 21 09:56:06 crc kubenswrapper[5113]: I0121 09:56:06.989046 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483156-2j2xm" Jan 21 09:56:06 crc kubenswrapper[5113]: I0121 09:56:06.989063 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39c8a886494e3b751efdaed7af9ffe3629956cb45699bebed97400a0b4c120ff" Jan 21 09:56:07 crc kubenswrapper[5113]: I0121 09:56:07.379239 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483150-zzq47"] Jan 21 09:56:07 crc kubenswrapper[5113]: I0121 09:56:07.388770 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483150-zzq47"] Jan 21 09:56:08 crc kubenswrapper[5113]: I0121 09:56:08.857939 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bac63cd0-23a9-4405-a824-ac30e6ca8192" path="/var/lib/kubelet/pods/bac63cd0-23a9-4405-a824-ac30e6ca8192/volumes" Jan 21 09:56:20 crc kubenswrapper[5113]: I0121 09:56:20.857526 5113 scope.go:117] "RemoveContainer" containerID="ee5539020b573801c63958627ee7726587f769091e02d7a0cbaac2ca07a86658" Jan 21 09:56:20 crc kubenswrapper[5113]: E0121 09:56:20.858651 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 09:56:35 crc kubenswrapper[5113]: I0121 09:56:35.843839 5113 scope.go:117] "RemoveContainer" 
containerID="ee5539020b573801c63958627ee7726587f769091e02d7a0cbaac2ca07a86658" Jan 21 09:56:35 crc kubenswrapper[5113]: E0121 09:56:35.844661 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 09:56:46 crc kubenswrapper[5113]: I0121 09:56:46.844417 5113 scope.go:117] "RemoveContainer" containerID="ee5539020b573801c63958627ee7726587f769091e02d7a0cbaac2ca07a86658" Jan 21 09:56:46 crc kubenswrapper[5113]: E0121 09:56:46.845854 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 09:57:00 crc kubenswrapper[5113]: I0121 09:57:00.856079 5113 scope.go:117] "RemoveContainer" containerID="ee5539020b573801c63958627ee7726587f769091e02d7a0cbaac2ca07a86658" Jan 21 09:57:00 crc kubenswrapper[5113]: E0121 09:57:00.857255 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 09:57:01 crc kubenswrapper[5113]: I0121 09:57:01.773465 5113 scope.go:117] 
"RemoveContainer" containerID="40e0ff9ef8810f222043a364e172129782229aa013bace6f05119ed7f7f22a8c" Jan 21 09:57:14 crc kubenswrapper[5113]: I0121 09:57:14.846262 5113 scope.go:117] "RemoveContainer" containerID="ee5539020b573801c63958627ee7726587f769091e02d7a0cbaac2ca07a86658" Jan 21 09:57:14 crc kubenswrapper[5113]: E0121 09:57:14.847404 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 09:57:27 crc kubenswrapper[5113]: I0121 09:57:27.844915 5113 scope.go:117] "RemoveContainer" containerID="ee5539020b573801c63958627ee7726587f769091e02d7a0cbaac2ca07a86658" Jan 21 09:57:27 crc kubenswrapper[5113]: E0121 09:57:27.846370 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 09:57:42 crc kubenswrapper[5113]: I0121 09:57:42.844678 5113 scope.go:117] "RemoveContainer" containerID="ee5539020b573801c63958627ee7726587f769091e02d7a0cbaac2ca07a86658" Jan 21 09:57:42 crc kubenswrapper[5113]: E0121 09:57:42.846044 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 09:57:51 crc kubenswrapper[5113]: I0121 09:57:51.960335 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vcw7s_11da35cd-b282-4537-ac8f-b6c86b18c21f/kube-multus/0.log" Jan 21 09:57:51 crc kubenswrapper[5113]: I0121 09:57:51.967476 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vcw7s_11da35cd-b282-4537-ac8f-b6c86b18c21f/kube-multus/0.log" Jan 21 09:57:51 crc kubenswrapper[5113]: I0121 09:57:51.973247 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 09:57:51 crc kubenswrapper[5113]: I0121 09:57:51.979874 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 09:57:54 crc kubenswrapper[5113]: I0121 09:57:54.843539 5113 scope.go:117] "RemoveContainer" containerID="ee5539020b573801c63958627ee7726587f769091e02d7a0cbaac2ca07a86658" Jan 21 09:57:54 crc kubenswrapper[5113]: E0121 09:57:54.844022 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 09:58:00 crc kubenswrapper[5113]: I0121 09:58:00.160769 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483158-mfgcx"] Jan 21 09:58:00 crc kubenswrapper[5113]: I0121 09:58:00.165220 5113 cpu_manager.go:401] "RemoveStaleState: 
containerMap: removing container" podUID="706c9d74-2b42-4584-b005-ba8c14f26de0" containerName="oc" Jan 21 09:58:00 crc kubenswrapper[5113]: I0121 09:58:00.165511 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="706c9d74-2b42-4584-b005-ba8c14f26de0" containerName="oc" Jan 21 09:58:00 crc kubenswrapper[5113]: I0121 09:58:00.165983 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="706c9d74-2b42-4584-b005-ba8c14f26de0" containerName="oc" Jan 21 09:58:00 crc kubenswrapper[5113]: I0121 09:58:00.196576 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483158-mfgcx"] Jan 21 09:58:00 crc kubenswrapper[5113]: I0121 09:58:00.196876 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483158-mfgcx" Jan 21 09:58:00 crc kubenswrapper[5113]: I0121 09:58:00.200547 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-mmrr5\"" Jan 21 09:58:00 crc kubenswrapper[5113]: I0121 09:58:00.201200 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 09:58:00 crc kubenswrapper[5113]: I0121 09:58:00.201422 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 09:58:00 crc kubenswrapper[5113]: I0121 09:58:00.339522 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r8hh\" (UniqueName: \"kubernetes.io/projected/0104258e-ab5b-4f01-bd81-773277032f6a-kube-api-access-9r8hh\") pod \"auto-csr-approver-29483158-mfgcx\" (UID: \"0104258e-ab5b-4f01-bd81-773277032f6a\") " pod="openshift-infra/auto-csr-approver-29483158-mfgcx" Jan 21 09:58:00 crc kubenswrapper[5113]: I0121 09:58:00.442256 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9r8hh\" (UniqueName: \"kubernetes.io/projected/0104258e-ab5b-4f01-bd81-773277032f6a-kube-api-access-9r8hh\") pod \"auto-csr-approver-29483158-mfgcx\" (UID: \"0104258e-ab5b-4f01-bd81-773277032f6a\") " pod="openshift-infra/auto-csr-approver-29483158-mfgcx" Jan 21 09:58:00 crc kubenswrapper[5113]: I0121 09:58:00.486509 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r8hh\" (UniqueName: \"kubernetes.io/projected/0104258e-ab5b-4f01-bd81-773277032f6a-kube-api-access-9r8hh\") pod \"auto-csr-approver-29483158-mfgcx\" (UID: \"0104258e-ab5b-4f01-bd81-773277032f6a\") " pod="openshift-infra/auto-csr-approver-29483158-mfgcx" Jan 21 09:58:00 crc kubenswrapper[5113]: I0121 09:58:00.533949 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483158-mfgcx" Jan 21 09:58:00 crc kubenswrapper[5113]: I0121 09:58:00.861138 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483158-mfgcx"] Jan 21 09:58:01 crc kubenswrapper[5113]: I0121 09:58:01.128614 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483158-mfgcx" event={"ID":"0104258e-ab5b-4f01-bd81-773277032f6a","Type":"ContainerStarted","Data":"deaa7b76ebe3bbb3be3ac2d50235fafdf83633f92637e814b67a53d8d05b392e"} Jan 21 09:58:02 crc kubenswrapper[5113]: I0121 09:58:02.148260 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483158-mfgcx" event={"ID":"0104258e-ab5b-4f01-bd81-773277032f6a","Type":"ContainerStarted","Data":"af697446c91f69917c6620eb7608200ae1e1a13650e6e32eef2f9740457da2a2"} Jan 21 09:58:02 crc kubenswrapper[5113]: I0121 09:58:02.171916 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29483158-mfgcx" podStartSLOduration=1.289139937 podStartE2EDuration="2.171900234s" podCreationTimestamp="2026-01-21 09:58:00 
+0000 UTC" firstStartedPulling="2026-01-21 09:58:00.856368449 +0000 UTC m=+2410.357195498" lastFinishedPulling="2026-01-21 09:58:01.739128716 +0000 UTC m=+2411.239955795" observedRunningTime="2026-01-21 09:58:02.163870526 +0000 UTC m=+2411.664697575" watchObservedRunningTime="2026-01-21 09:58:02.171900234 +0000 UTC m=+2411.672727283" Jan 21 09:58:03 crc kubenswrapper[5113]: I0121 09:58:03.157326 5113 generic.go:358] "Generic (PLEG): container finished" podID="0104258e-ab5b-4f01-bd81-773277032f6a" containerID="af697446c91f69917c6620eb7608200ae1e1a13650e6e32eef2f9740457da2a2" exitCode=0 Jan 21 09:58:03 crc kubenswrapper[5113]: I0121 09:58:03.157393 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483158-mfgcx" event={"ID":"0104258e-ab5b-4f01-bd81-773277032f6a","Type":"ContainerDied","Data":"af697446c91f69917c6620eb7608200ae1e1a13650e6e32eef2f9740457da2a2"} Jan 21 09:58:04 crc kubenswrapper[5113]: I0121 09:58:04.511578 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483158-mfgcx" Jan 21 09:58:04 crc kubenswrapper[5113]: I0121 09:58:04.610229 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9r8hh\" (UniqueName: \"kubernetes.io/projected/0104258e-ab5b-4f01-bd81-773277032f6a-kube-api-access-9r8hh\") pod \"0104258e-ab5b-4f01-bd81-773277032f6a\" (UID: \"0104258e-ab5b-4f01-bd81-773277032f6a\") " Jan 21 09:58:04 crc kubenswrapper[5113]: I0121 09:58:04.616194 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0104258e-ab5b-4f01-bd81-773277032f6a-kube-api-access-9r8hh" (OuterVolumeSpecName: "kube-api-access-9r8hh") pod "0104258e-ab5b-4f01-bd81-773277032f6a" (UID: "0104258e-ab5b-4f01-bd81-773277032f6a"). InnerVolumeSpecName "kube-api-access-9r8hh". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 09:58:04 crc kubenswrapper[5113]: I0121 09:58:04.711477 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9r8hh\" (UniqueName: \"kubernetes.io/projected/0104258e-ab5b-4f01-bd81-773277032f6a-kube-api-access-9r8hh\") on node \"crc\" DevicePath \"\"" Jan 21 09:58:05 crc kubenswrapper[5113]: I0121 09:58:05.176876 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483158-mfgcx" Jan 21 09:58:05 crc kubenswrapper[5113]: I0121 09:58:05.176888 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483158-mfgcx" event={"ID":"0104258e-ab5b-4f01-bd81-773277032f6a","Type":"ContainerDied","Data":"deaa7b76ebe3bbb3be3ac2d50235fafdf83633f92637e814b67a53d8d05b392e"} Jan 21 09:58:05 crc kubenswrapper[5113]: I0121 09:58:05.177059 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="deaa7b76ebe3bbb3be3ac2d50235fafdf83633f92637e814b67a53d8d05b392e" Jan 21 09:58:05 crc kubenswrapper[5113]: I0121 09:58:05.237260 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483152-nwxhr"] Jan 21 09:58:05 crc kubenswrapper[5113]: I0121 09:58:05.243531 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483152-nwxhr"] Jan 21 09:58:06 crc kubenswrapper[5113]: I0121 09:58:06.865761 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66d59865-2ed1-4301-8297-aeaff69d829b" path="/var/lib/kubelet/pods/66d59865-2ed1-4301-8297-aeaff69d829b/volumes" Jan 21 09:58:09 crc kubenswrapper[5113]: I0121 09:58:09.844658 5113 scope.go:117] "RemoveContainer" containerID="ee5539020b573801c63958627ee7726587f769091e02d7a0cbaac2ca07a86658" Jan 21 09:58:09 crc kubenswrapper[5113]: E0121 09:58:09.845410 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 09:58:24 crc kubenswrapper[5113]: I0121 09:58:24.845777 5113 scope.go:117] "RemoveContainer" containerID="ee5539020b573801c63958627ee7726587f769091e02d7a0cbaac2ca07a86658" Jan 21 09:58:24 crc kubenswrapper[5113]: E0121 09:58:24.846829 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 09:58:37 crc kubenswrapper[5113]: I0121 09:58:37.843117 5113 scope.go:117] "RemoveContainer" containerID="ee5539020b573801c63958627ee7726587f769091e02d7a0cbaac2ca07a86658" Jan 21 09:58:37 crc kubenswrapper[5113]: E0121 09:58:37.845830 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 09:58:49 crc kubenswrapper[5113]: I0121 09:58:49.843809 5113 scope.go:117] "RemoveContainer" containerID="ee5539020b573801c63958627ee7726587f769091e02d7a0cbaac2ca07a86658" Jan 21 09:58:49 crc kubenswrapper[5113]: E0121 09:58:49.844872 5113 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 09:59:00 crc kubenswrapper[5113]: I0121 09:59:00.854874 5113 scope.go:117] "RemoveContainer" containerID="ee5539020b573801c63958627ee7726587f769091e02d7a0cbaac2ca07a86658" Jan 21 09:59:00 crc kubenswrapper[5113]: E0121 09:59:00.856391 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 09:59:01 crc kubenswrapper[5113]: I0121 09:59:01.949518 5113 scope.go:117] "RemoveContainer" containerID="db54d8da278cd766c4b5bc6ecebf204f86d3da5a1cf1a91b6be838785785917a" Jan 21 09:59:11 crc kubenswrapper[5113]: I0121 09:59:11.844691 5113 scope.go:117] "RemoveContainer" containerID="ee5539020b573801c63958627ee7726587f769091e02d7a0cbaac2ca07a86658" Jan 21 09:59:11 crc kubenswrapper[5113]: E0121 09:59:11.846061 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 09:59:26 crc kubenswrapper[5113]: I0121 09:59:26.844290 5113 scope.go:117] 
"RemoveContainer" containerID="ee5539020b573801c63958627ee7726587f769091e02d7a0cbaac2ca07a86658" Jan 21 09:59:26 crc kubenswrapper[5113]: E0121 09:59:26.845417 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 09:59:39 crc kubenswrapper[5113]: I0121 09:59:39.844658 5113 scope.go:117] "RemoveContainer" containerID="ee5539020b573801c63958627ee7726587f769091e02d7a0cbaac2ca07a86658" Jan 21 09:59:39 crc kubenswrapper[5113]: E0121 09:59:39.845800 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 09:59:52 crc kubenswrapper[5113]: I0121 09:59:52.843901 5113 scope.go:117] "RemoveContainer" containerID="ee5539020b573801c63958627ee7726587f769091e02d7a0cbaac2ca07a86658" Jan 21 09:59:52 crc kubenswrapper[5113]: E0121 09:59:52.844809 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:00:00 crc kubenswrapper[5113]: I0121 10:00:00.146709 
5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483160-ls6tr"]
Jan 21 10:00:00 crc kubenswrapper[5113]: I0121 10:00:00.149184 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0104258e-ab5b-4f01-bd81-773277032f6a" containerName="oc"
Jan 21 10:00:00 crc kubenswrapper[5113]: I0121 10:00:00.149286 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="0104258e-ab5b-4f01-bd81-773277032f6a" containerName="oc"
Jan 21 10:00:00 crc kubenswrapper[5113]: I0121 10:00:00.149848 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="0104258e-ab5b-4f01-bd81-773277032f6a" containerName="oc"
Jan 21 10:00:00 crc kubenswrapper[5113]: I0121 10:00:00.161175 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483160-ls6tr"
Jan 21 10:00:00 crc kubenswrapper[5113]: I0121 10:00:00.161029 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483160-jbbrp"]
Jan 21 10:00:00 crc kubenswrapper[5113]: I0121 10:00:00.164124 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 21 10:00:00 crc kubenswrapper[5113]: I0121 10:00:00.164124 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 21 10:00:00 crc kubenswrapper[5113]: I0121 10:00:00.164988 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-mmrr5\""
Jan 21 10:00:00 crc kubenswrapper[5113]: I0121 10:00:00.167759 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483160-ls6tr"]
Jan 21 10:00:00 crc kubenswrapper[5113]: I0121 10:00:00.167980 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483160-jbbrp"]
Jan 21 10:00:00 crc kubenswrapper[5113]: I0121 10:00:00.168098 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-jbbrp"
Jan 21 10:00:00 crc kubenswrapper[5113]: I0121 10:00:00.171029 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\""
Jan 21 10:00:00 crc kubenswrapper[5113]: I0121 10:00:00.171029 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\""
Jan 21 10:00:00 crc kubenswrapper[5113]: I0121 10:00:00.264023 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj4t9\" (UniqueName: \"kubernetes.io/projected/2bb42291-5675-4a1c-b7be-12e9f368a94d-kube-api-access-zj4t9\") pod \"collect-profiles-29483160-jbbrp\" (UID: \"2bb42291-5675-4a1c-b7be-12e9f368a94d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-jbbrp"
Jan 21 10:00:00 crc kubenswrapper[5113]: I0121 10:00:00.264212 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2spql\" (UniqueName: \"kubernetes.io/projected/b42489fb-b8cb-4ccd-8aad-d5f2c09b02c3-kube-api-access-2spql\") pod \"auto-csr-approver-29483160-ls6tr\" (UID: \"b42489fb-b8cb-4ccd-8aad-d5f2c09b02c3\") " pod="openshift-infra/auto-csr-approver-29483160-ls6tr"
Jan 21 10:00:00 crc kubenswrapper[5113]: I0121 10:00:00.264291 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bb42291-5675-4a1c-b7be-12e9f368a94d-secret-volume\") pod \"collect-profiles-29483160-jbbrp\" (UID: \"2bb42291-5675-4a1c-b7be-12e9f368a94d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-jbbrp"
Jan 21 10:00:00 crc kubenswrapper[5113]: I0121 10:00:00.264344 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bb42291-5675-4a1c-b7be-12e9f368a94d-config-volume\") pod \"collect-profiles-29483160-jbbrp\" (UID: \"2bb42291-5675-4a1c-b7be-12e9f368a94d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-jbbrp"
Jan 21 10:00:00 crc kubenswrapper[5113]: I0121 10:00:00.366375 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zj4t9\" (UniqueName: \"kubernetes.io/projected/2bb42291-5675-4a1c-b7be-12e9f368a94d-kube-api-access-zj4t9\") pod \"collect-profiles-29483160-jbbrp\" (UID: \"2bb42291-5675-4a1c-b7be-12e9f368a94d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-jbbrp"
Jan 21 10:00:00 crc kubenswrapper[5113]: I0121 10:00:00.366476 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2spql\" (UniqueName: \"kubernetes.io/projected/b42489fb-b8cb-4ccd-8aad-d5f2c09b02c3-kube-api-access-2spql\") pod \"auto-csr-approver-29483160-ls6tr\" (UID: \"b42489fb-b8cb-4ccd-8aad-d5f2c09b02c3\") " pod="openshift-infra/auto-csr-approver-29483160-ls6tr"
Jan 21 10:00:00 crc kubenswrapper[5113]: I0121 10:00:00.366554 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bb42291-5675-4a1c-b7be-12e9f368a94d-secret-volume\") pod \"collect-profiles-29483160-jbbrp\" (UID: \"2bb42291-5675-4a1c-b7be-12e9f368a94d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-jbbrp"
Jan 21 10:00:00 crc kubenswrapper[5113]: I0121 10:00:00.366860 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bb42291-5675-4a1c-b7be-12e9f368a94d-config-volume\") pod \"collect-profiles-29483160-jbbrp\" (UID: \"2bb42291-5675-4a1c-b7be-12e9f368a94d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-jbbrp"
Jan 21 10:00:00 crc kubenswrapper[5113]: I0121 10:00:00.368140 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bb42291-5675-4a1c-b7be-12e9f368a94d-config-volume\") pod \"collect-profiles-29483160-jbbrp\" (UID: \"2bb42291-5675-4a1c-b7be-12e9f368a94d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-jbbrp"
Jan 21 10:00:00 crc kubenswrapper[5113]: I0121 10:00:00.388029 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj4t9\" (UniqueName: \"kubernetes.io/projected/2bb42291-5675-4a1c-b7be-12e9f368a94d-kube-api-access-zj4t9\") pod \"collect-profiles-29483160-jbbrp\" (UID: \"2bb42291-5675-4a1c-b7be-12e9f368a94d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-jbbrp"
Jan 21 10:00:00 crc kubenswrapper[5113]: I0121 10:00:00.390199 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bb42291-5675-4a1c-b7be-12e9f368a94d-secret-volume\") pod \"collect-profiles-29483160-jbbrp\" (UID: \"2bb42291-5675-4a1c-b7be-12e9f368a94d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-jbbrp"
Jan 21 10:00:00 crc kubenswrapper[5113]: I0121 10:00:00.390504 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2spql\" (UniqueName: \"kubernetes.io/projected/b42489fb-b8cb-4ccd-8aad-d5f2c09b02c3-kube-api-access-2spql\") pod \"auto-csr-approver-29483160-ls6tr\" (UID: \"b42489fb-b8cb-4ccd-8aad-d5f2c09b02c3\") " pod="openshift-infra/auto-csr-approver-29483160-ls6tr"
Jan 21 10:00:00 crc kubenswrapper[5113]: I0121 10:00:00.492256 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483160-ls6tr"
Jan 21 10:00:00 crc kubenswrapper[5113]: I0121 10:00:00.516486 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-jbbrp"
Jan 21 10:00:00 crc kubenswrapper[5113]: I0121 10:00:00.729706 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483160-ls6tr"]
Jan 21 10:00:00 crc kubenswrapper[5113]: I0121 10:00:00.989473 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483160-jbbrp"]
Jan 21 10:00:01 crc kubenswrapper[5113]: W0121 10:00:01.000436 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2bb42291_5675_4a1c_b7be_12e9f368a94d.slice/crio-ee1e6af4bcaae06ea97213e987ecc17bc22d93a5d81482b2a0acaebe2c03a8f4 WatchSource:0}: Error finding container ee1e6af4bcaae06ea97213e987ecc17bc22d93a5d81482b2a0acaebe2c03a8f4: Status 404 returned error can't find the container with id ee1e6af4bcaae06ea97213e987ecc17bc22d93a5d81482b2a0acaebe2c03a8f4
Jan 21 10:00:01 crc kubenswrapper[5113]: I0121 10:00:01.396801 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483160-ls6tr" event={"ID":"b42489fb-b8cb-4ccd-8aad-d5f2c09b02c3","Type":"ContainerStarted","Data":"736de068b271b60d29eb4ef2d043921e0107f8ac77eb9f40202b5c4e3248f83d"}
Jan 21 10:00:01 crc kubenswrapper[5113]: I0121 10:00:01.398464 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-jbbrp" event={"ID":"2bb42291-5675-4a1c-b7be-12e9f368a94d","Type":"ContainerStarted","Data":"3765f7a83ebedb8fc32532ca77f4982efe2c1ed130c4d669876784458ea544de"}
Jan 21 10:00:01 crc kubenswrapper[5113]: I0121 10:00:01.398662 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-jbbrp" event={"ID":"2bb42291-5675-4a1c-b7be-12e9f368a94d","Type":"ContainerStarted","Data":"ee1e6af4bcaae06ea97213e987ecc17bc22d93a5d81482b2a0acaebe2c03a8f4"}
Jan 21 10:00:01 crc kubenswrapper[5113]: I0121 10:00:01.418140 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-jbbrp" podStartSLOduration=1.418120096 podStartE2EDuration="1.418120096s" podCreationTimestamp="2026-01-21 10:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:00:01.418019813 +0000 UTC m=+2530.918846912" watchObservedRunningTime="2026-01-21 10:00:01.418120096 +0000 UTC m=+2530.918947145"
Jan 21 10:00:02 crc kubenswrapper[5113]: I0121 10:00:02.410635 5113 generic.go:358] "Generic (PLEG): container finished" podID="2bb42291-5675-4a1c-b7be-12e9f368a94d" containerID="3765f7a83ebedb8fc32532ca77f4982efe2c1ed130c4d669876784458ea544de" exitCode=0
Jan 21 10:00:02 crc kubenswrapper[5113]: I0121 10:00:02.410724 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-jbbrp" event={"ID":"2bb42291-5675-4a1c-b7be-12e9f368a94d","Type":"ContainerDied","Data":"3765f7a83ebedb8fc32532ca77f4982efe2c1ed130c4d669876784458ea544de"}
Jan 21 10:00:03 crc kubenswrapper[5113]: I0121 10:00:03.698493 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-jbbrp"
Jan 21 10:00:03 crc kubenswrapper[5113]: I0121 10:00:03.832004 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bb42291-5675-4a1c-b7be-12e9f368a94d-secret-volume\") pod \"2bb42291-5675-4a1c-b7be-12e9f368a94d\" (UID: \"2bb42291-5675-4a1c-b7be-12e9f368a94d\") "
Jan 21 10:00:03 crc kubenswrapper[5113]: I0121 10:00:03.832146 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zj4t9\" (UniqueName: \"kubernetes.io/projected/2bb42291-5675-4a1c-b7be-12e9f368a94d-kube-api-access-zj4t9\") pod \"2bb42291-5675-4a1c-b7be-12e9f368a94d\" (UID: \"2bb42291-5675-4a1c-b7be-12e9f368a94d\") "
Jan 21 10:00:03 crc kubenswrapper[5113]: I0121 10:00:03.832256 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bb42291-5675-4a1c-b7be-12e9f368a94d-config-volume\") pod \"2bb42291-5675-4a1c-b7be-12e9f368a94d\" (UID: \"2bb42291-5675-4a1c-b7be-12e9f368a94d\") "
Jan 21 10:00:03 crc kubenswrapper[5113]: I0121 10:00:03.832999 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bb42291-5675-4a1c-b7be-12e9f368a94d-config-volume" (OuterVolumeSpecName: "config-volume") pod "2bb42291-5675-4a1c-b7be-12e9f368a94d" (UID: "2bb42291-5675-4a1c-b7be-12e9f368a94d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 10:00:03 crc kubenswrapper[5113]: I0121 10:00:03.843429 5113 scope.go:117] "RemoveContainer" containerID="ee5539020b573801c63958627ee7726587f769091e02d7a0cbaac2ca07a86658"
Jan 21 10:00:03 crc kubenswrapper[5113]: I0121 10:00:03.844230 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bb42291-5675-4a1c-b7be-12e9f368a94d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2bb42291-5675-4a1c-b7be-12e9f368a94d" (UID: "2bb42291-5675-4a1c-b7be-12e9f368a94d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 10:00:03 crc kubenswrapper[5113]: I0121 10:00:03.844871 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bb42291-5675-4a1c-b7be-12e9f368a94d-kube-api-access-zj4t9" (OuterVolumeSpecName: "kube-api-access-zj4t9") pod "2bb42291-5675-4a1c-b7be-12e9f368a94d" (UID: "2bb42291-5675-4a1c-b7be-12e9f368a94d"). InnerVolumeSpecName "kube-api-access-zj4t9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 10:00:03 crc kubenswrapper[5113]: I0121 10:00:03.934386 5113 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bb42291-5675-4a1c-b7be-12e9f368a94d-config-volume\") on node \"crc\" DevicePath \"\""
Jan 21 10:00:03 crc kubenswrapper[5113]: I0121 10:00:03.934992 5113 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bb42291-5675-4a1c-b7be-12e9f368a94d-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 21 10:00:03 crc kubenswrapper[5113]: I0121 10:00:03.935011 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zj4t9\" (UniqueName: \"kubernetes.io/projected/2bb42291-5675-4a1c-b7be-12e9f368a94d-kube-api-access-zj4t9\") on node \"crc\" DevicePath \"\""
Jan 21 10:00:04 crc kubenswrapper[5113]: I0121 10:00:04.436364 5113 generic.go:358] "Generic (PLEG): container finished" podID="b42489fb-b8cb-4ccd-8aad-d5f2c09b02c3" containerID="044d7f3fc7ccab20edaa366a25fa85f9c8adfc4c414352c2aa854b2c848a3f7d" exitCode=0
Jan 21 10:00:04 crc kubenswrapper[5113]: I0121 10:00:04.436501 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483160-ls6tr" event={"ID":"b42489fb-b8cb-4ccd-8aad-d5f2c09b02c3","Type":"ContainerDied","Data":"044d7f3fc7ccab20edaa366a25fa85f9c8adfc4c414352c2aa854b2c848a3f7d"}
Jan 21 10:00:04 crc kubenswrapper[5113]: I0121 10:00:04.441572 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-jbbrp"
Jan 21 10:00:04 crc kubenswrapper[5113]: I0121 10:00:04.442187 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483160-jbbrp" event={"ID":"2bb42291-5675-4a1c-b7be-12e9f368a94d","Type":"ContainerDied","Data":"ee1e6af4bcaae06ea97213e987ecc17bc22d93a5d81482b2a0acaebe2c03a8f4"}
Jan 21 10:00:04 crc kubenswrapper[5113]: I0121 10:00:04.442249 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee1e6af4bcaae06ea97213e987ecc17bc22d93a5d81482b2a0acaebe2c03a8f4"
Jan 21 10:00:04 crc kubenswrapper[5113]: I0121 10:00:04.446956 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" event={"ID":"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd","Type":"ContainerStarted","Data":"b98820d48460bd73ec9a158e9d5d327f25887cad19e28743f6fc869fcf62fe1d"}
Jan 21 10:00:04 crc kubenswrapper[5113]: I0121 10:00:04.546997 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483115-gpmgl"]
Jan 21 10:00:04 crc kubenswrapper[5113]: I0121 10:00:04.553488 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483115-gpmgl"]
Jan 21 10:00:04 crc kubenswrapper[5113]: I0121 10:00:04.854087 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcdec0ee-3553-4c15-ad1f-eb6b29eec33a" path="/var/lib/kubelet/pods/dcdec0ee-3553-4c15-ad1f-eb6b29eec33a/volumes"
Jan 21 10:00:05 crc kubenswrapper[5113]: I0121 10:00:05.760468 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483160-ls6tr"
Jan 21 10:00:05 crc kubenswrapper[5113]: I0121 10:00:05.868882 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2spql\" (UniqueName: \"kubernetes.io/projected/b42489fb-b8cb-4ccd-8aad-d5f2c09b02c3-kube-api-access-2spql\") pod \"b42489fb-b8cb-4ccd-8aad-d5f2c09b02c3\" (UID: \"b42489fb-b8cb-4ccd-8aad-d5f2c09b02c3\") "
Jan 21 10:00:05 crc kubenswrapper[5113]: I0121 10:00:05.880092 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b42489fb-b8cb-4ccd-8aad-d5f2c09b02c3-kube-api-access-2spql" (OuterVolumeSpecName: "kube-api-access-2spql") pod "b42489fb-b8cb-4ccd-8aad-d5f2c09b02c3" (UID: "b42489fb-b8cb-4ccd-8aad-d5f2c09b02c3"). InnerVolumeSpecName "kube-api-access-2spql". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 10:00:05 crc kubenswrapper[5113]: I0121 10:00:05.972814 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2spql\" (UniqueName: \"kubernetes.io/projected/b42489fb-b8cb-4ccd-8aad-d5f2c09b02c3-kube-api-access-2spql\") on node \"crc\" DevicePath \"\""
Jan 21 10:00:06 crc kubenswrapper[5113]: I0121 10:00:06.471573 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483160-ls6tr" event={"ID":"b42489fb-b8cb-4ccd-8aad-d5f2c09b02c3","Type":"ContainerDied","Data":"736de068b271b60d29eb4ef2d043921e0107f8ac77eb9f40202b5c4e3248f83d"}
Jan 21 10:00:06 crc kubenswrapper[5113]: I0121 10:00:06.471965 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="736de068b271b60d29eb4ef2d043921e0107f8ac77eb9f40202b5c4e3248f83d"
Jan 21 10:00:06 crc kubenswrapper[5113]: I0121 10:00:06.471713 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483160-ls6tr"
Jan 21 10:00:06 crc kubenswrapper[5113]: I0121 10:00:06.832299 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483154-6jr59"]
Jan 21 10:00:06 crc kubenswrapper[5113]: I0121 10:00:06.841210 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483154-6jr59"]
Jan 21 10:00:06 crc kubenswrapper[5113]: I0121 10:00:06.862077 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41e686df-fede-4589-a65e-8699310c39dd" path="/var/lib/kubelet/pods/41e686df-fede-4589-a65e-8699310c39dd/volumes"
Jan 21 10:01:02 crc kubenswrapper[5113]: I0121 10:01:02.119808 5113 scope.go:117] "RemoveContainer" containerID="fcc0dca8a59f603c746f368a24d8750249582b240afc7377bebaa3bdb1e96cbb"
Jan 21 10:01:02 crc kubenswrapper[5113]: I0121 10:01:02.159385 5113 scope.go:117] "RemoveContainer" containerID="e097c79ed23da7c83aa3c2c72e0ae39c4a830aa4fc2d2374d9207b9103c3526e"
Jan 21 10:01:52 crc kubenswrapper[5113]: I0121 10:01:52.682037 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dpbzc"]
Jan 21 10:01:52 crc kubenswrapper[5113]: I0121 10:01:52.683368 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b42489fb-b8cb-4ccd-8aad-d5f2c09b02c3" containerName="oc"
Jan 21 10:01:52 crc kubenswrapper[5113]: I0121 10:01:52.683390 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="b42489fb-b8cb-4ccd-8aad-d5f2c09b02c3" containerName="oc"
Jan 21 10:01:52 crc kubenswrapper[5113]: I0121 10:01:52.683403 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2bb42291-5675-4a1c-b7be-12e9f368a94d" containerName="collect-profiles"
Jan 21 10:01:52 crc kubenswrapper[5113]: I0121 10:01:52.683410 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bb42291-5675-4a1c-b7be-12e9f368a94d" containerName="collect-profiles"
Jan 21 10:01:52 crc kubenswrapper[5113]: I0121 10:01:52.683564 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="2bb42291-5675-4a1c-b7be-12e9f368a94d" containerName="collect-profiles"
Jan 21 10:01:52 crc kubenswrapper[5113]: I0121 10:01:52.683578 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="b42489fb-b8cb-4ccd-8aad-d5f2c09b02c3" containerName="oc"
Jan 21 10:01:52 crc kubenswrapper[5113]: I0121 10:01:52.694442 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dpbzc"
Jan 21 10:01:52 crc kubenswrapper[5113]: I0121 10:01:52.697030 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dpbzc"]
Jan 21 10:01:52 crc kubenswrapper[5113]: I0121 10:01:52.733495 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh4jv\" (UniqueName: \"kubernetes.io/projected/0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58-kube-api-access-sh4jv\") pod \"community-operators-dpbzc\" (UID: \"0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58\") " pod="openshift-marketplace/community-operators-dpbzc"
Jan 21 10:01:52 crc kubenswrapper[5113]: I0121 10:01:52.733542 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58-utilities\") pod \"community-operators-dpbzc\" (UID: \"0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58\") " pod="openshift-marketplace/community-operators-dpbzc"
Jan 21 10:01:52 crc kubenswrapper[5113]: I0121 10:01:52.733625 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58-catalog-content\") pod \"community-operators-dpbzc\" (UID: \"0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58\") " pod="openshift-marketplace/community-operators-dpbzc"
Jan 21 10:01:52 crc kubenswrapper[5113]: I0121 10:01:52.835243 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sh4jv\" (UniqueName: \"kubernetes.io/projected/0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58-kube-api-access-sh4jv\") pod \"community-operators-dpbzc\" (UID: \"0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58\") " pod="openshift-marketplace/community-operators-dpbzc"
Jan 21 10:01:52 crc kubenswrapper[5113]: I0121 10:01:52.835518 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58-utilities\") pod \"community-operators-dpbzc\" (UID: \"0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58\") " pod="openshift-marketplace/community-operators-dpbzc"
Jan 21 10:01:52 crc kubenswrapper[5113]: I0121 10:01:52.835596 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58-catalog-content\") pod \"community-operators-dpbzc\" (UID: \"0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58\") " pod="openshift-marketplace/community-operators-dpbzc"
Jan 21 10:01:52 crc kubenswrapper[5113]: I0121 10:01:52.836379 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58-catalog-content\") pod \"community-operators-dpbzc\" (UID: \"0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58\") " pod="openshift-marketplace/community-operators-dpbzc"
Jan 21 10:01:52 crc kubenswrapper[5113]: I0121 10:01:52.837077 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58-utilities\") pod \"community-operators-dpbzc\" (UID: \"0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58\") " pod="openshift-marketplace/community-operators-dpbzc"
Jan 21 10:01:52 crc kubenswrapper[5113]: I0121 10:01:52.873232 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh4jv\" (UniqueName: \"kubernetes.io/projected/0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58-kube-api-access-sh4jv\") pod \"community-operators-dpbzc\" (UID: \"0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58\") " pod="openshift-marketplace/community-operators-dpbzc"
Jan 21 10:01:53 crc kubenswrapper[5113]: I0121 10:01:53.015415 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dpbzc"
Jan 21 10:01:53 crc kubenswrapper[5113]: I0121 10:01:53.522691 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dpbzc"]
Jan 21 10:01:53 crc kubenswrapper[5113]: W0121 10:01:53.536856 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0620ab8e_1f82_4a9b_9c0d_d02aaa16eb58.slice/crio-49224ed4729d2c36724e7183a64443581a042cc91828582053cf8e1505ae1d52 WatchSource:0}: Error finding container 49224ed4729d2c36724e7183a64443581a042cc91828582053cf8e1505ae1d52: Status 404 returned error can't find the container with id 49224ed4729d2c36724e7183a64443581a042cc91828582053cf8e1505ae1d52
Jan 21 10:01:53 crc kubenswrapper[5113]: I0121 10:01:53.538629 5113 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 10:01:54 crc kubenswrapper[5113]: I0121 10:01:54.478692 5113 generic.go:358] "Generic (PLEG): container finished" podID="0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58" containerID="db70e6bfc943f9ee854b5d2b60b243c0f6e75705a18c7038cbdf81017f7b75b0" exitCode=0
Jan 21 10:01:54 crc kubenswrapper[5113]: I0121 10:01:54.478822 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dpbzc" event={"ID":"0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58","Type":"ContainerDied","Data":"db70e6bfc943f9ee854b5d2b60b243c0f6e75705a18c7038cbdf81017f7b75b0"}
Jan 21 10:01:54 crc kubenswrapper[5113]: I0121 10:01:54.479025 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dpbzc" event={"ID":"0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58","Type":"ContainerStarted","Data":"49224ed4729d2c36724e7183a64443581a042cc91828582053cf8e1505ae1d52"}
Jan 21 10:01:56 crc kubenswrapper[5113]: I0121 10:01:56.513415 5113 generic.go:358] "Generic (PLEG): container finished" podID="0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58" containerID="aff14103b7061588eff96dc5ee9cfb0dc3ceca23af2aa597f4a0048b067dcb1a" exitCode=0
Jan 21 10:01:56 crc kubenswrapper[5113]: I0121 10:01:56.513726 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dpbzc" event={"ID":"0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58","Type":"ContainerDied","Data":"aff14103b7061588eff96dc5ee9cfb0dc3ceca23af2aa597f4a0048b067dcb1a"}
Jan 21 10:01:57 crc kubenswrapper[5113]: I0121 10:01:57.526372 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dpbzc" event={"ID":"0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58","Type":"ContainerStarted","Data":"724696e7ad927baa8a771eb636f93a411d35995221ff5d13a560cccc45126226"}
Jan 21 10:01:57 crc kubenswrapper[5113]: I0121 10:01:57.561000 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dpbzc" podStartSLOduration=4.293331624 podStartE2EDuration="5.560973709s" podCreationTimestamp="2026-01-21 10:01:52 +0000 UTC" firstStartedPulling="2026-01-21 10:01:54.480457717 +0000 UTC m=+2643.981284806" lastFinishedPulling="2026-01-21 10:01:55.748099802 +0000 UTC m=+2645.248926891" observedRunningTime="2026-01-21 10:01:57.551110939 +0000 UTC m=+2647.051938028" watchObservedRunningTime="2026-01-21 10:01:57.560973709 +0000 UTC m=+2647.061800788"
Jan 21 10:02:00 crc kubenswrapper[5113]: I0121 10:02:00.148264 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483162-lhnsv"]
Jan 21 10:02:00 crc kubenswrapper[5113]: I0121 10:02:00.172579 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483162-lhnsv"]
Jan 21 10:02:00 crc kubenswrapper[5113]: I0121 10:02:00.172701 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483162-lhnsv"
Jan 21 10:02:00 crc kubenswrapper[5113]: I0121 10:02:00.176729 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 21 10:02:00 crc kubenswrapper[5113]: I0121 10:02:00.176935 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 21 10:02:00 crc kubenswrapper[5113]: I0121 10:02:00.177211 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-mmrr5\""
Jan 21 10:02:00 crc kubenswrapper[5113]: I0121 10:02:00.272722 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n5dj\" (UniqueName: \"kubernetes.io/projected/e4a90851-c23b-406a-8e35-4a894ef1e09d-kube-api-access-6n5dj\") pod \"auto-csr-approver-29483162-lhnsv\" (UID: \"e4a90851-c23b-406a-8e35-4a894ef1e09d\") " pod="openshift-infra/auto-csr-approver-29483162-lhnsv"
Jan 21 10:02:00 crc kubenswrapper[5113]: I0121 10:02:00.375072 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6n5dj\" (UniqueName: \"kubernetes.io/projected/e4a90851-c23b-406a-8e35-4a894ef1e09d-kube-api-access-6n5dj\") pod \"auto-csr-approver-29483162-lhnsv\" (UID: \"e4a90851-c23b-406a-8e35-4a894ef1e09d\") " pod="openshift-infra/auto-csr-approver-29483162-lhnsv"
Jan 21 10:02:00 crc kubenswrapper[5113]: I0121 10:02:00.410565 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6n5dj\" (UniqueName: \"kubernetes.io/projected/e4a90851-c23b-406a-8e35-4a894ef1e09d-kube-api-access-6n5dj\") pod \"auto-csr-approver-29483162-lhnsv\" (UID: \"e4a90851-c23b-406a-8e35-4a894ef1e09d\") " pod="openshift-infra/auto-csr-approver-29483162-lhnsv"
Jan 21 10:02:00 crc kubenswrapper[5113]: I0121 10:02:00.499421 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483162-lhnsv"
Jan 21 10:02:00 crc kubenswrapper[5113]: I0121 10:02:00.787235 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483162-lhnsv"]
Jan 21 10:02:01 crc kubenswrapper[5113]: I0121 10:02:01.566117 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483162-lhnsv" event={"ID":"e4a90851-c23b-406a-8e35-4a894ef1e09d","Type":"ContainerStarted","Data":"72cdc97b461812f2b09e804f8a677835a21d919d89244df621202879a027b7c1"}
Jan 21 10:02:03 crc kubenswrapper[5113]: I0121 10:02:03.015928 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-dpbzc"
Jan 21 10:02:03 crc kubenswrapper[5113]: I0121 10:02:03.018179 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dpbzc"
Jan 21 10:02:03 crc kubenswrapper[5113]: I0121 10:02:03.096007 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dpbzc"
Jan 21 10:02:03 crc kubenswrapper[5113]: I0121 10:02:03.590334 5113 generic.go:358] "Generic (PLEG): container finished" podID="e4a90851-c23b-406a-8e35-4a894ef1e09d" containerID="99462184a6aa43dfdb16a531a896b8800e8f9d154616d8e582c1fc120a2610dd" exitCode=0
Jan 21 10:02:03 crc kubenswrapper[5113]: I0121 10:02:03.590485 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483162-lhnsv" event={"ID":"e4a90851-c23b-406a-8e35-4a894ef1e09d","Type":"ContainerDied","Data":"99462184a6aa43dfdb16a531a896b8800e8f9d154616d8e582c1fc120a2610dd"}
Jan 21 10:02:03 crc kubenswrapper[5113]: I0121 10:02:03.661576 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dpbzc"
Jan 21 10:02:03 crc kubenswrapper[5113]: I0121 10:02:03.719933 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dpbzc"]
Jan 21 10:02:04 crc kubenswrapper[5113]: I0121 10:02:04.990852 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483162-lhnsv"
Jan 21 10:02:05 crc kubenswrapper[5113]: I0121 10:02:05.070499 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6n5dj\" (UniqueName: \"kubernetes.io/projected/e4a90851-c23b-406a-8e35-4a894ef1e09d-kube-api-access-6n5dj\") pod \"e4a90851-c23b-406a-8e35-4a894ef1e09d\" (UID: \"e4a90851-c23b-406a-8e35-4a894ef1e09d\") "
Jan 21 10:02:05 crc kubenswrapper[5113]: I0121 10:02:05.078022 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4a90851-c23b-406a-8e35-4a894ef1e09d-kube-api-access-6n5dj" (OuterVolumeSpecName: "kube-api-access-6n5dj") pod "e4a90851-c23b-406a-8e35-4a894ef1e09d" (UID: "e4a90851-c23b-406a-8e35-4a894ef1e09d"). InnerVolumeSpecName "kube-api-access-6n5dj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 10:02:05 crc kubenswrapper[5113]: I0121 10:02:05.172669 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6n5dj\" (UniqueName: \"kubernetes.io/projected/e4a90851-c23b-406a-8e35-4a894ef1e09d-kube-api-access-6n5dj\") on node \"crc\" DevicePath \"\""
Jan 21 10:02:05 crc kubenswrapper[5113]: I0121 10:02:05.627131 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dpbzc" podUID="0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58" containerName="registry-server" containerID="cri-o://724696e7ad927baa8a771eb636f93a411d35995221ff5d13a560cccc45126226" gracePeriod=2
Jan 21 10:02:05 crc kubenswrapper[5113]: I0121 10:02:05.627316 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483162-lhnsv"
Jan 21 10:02:05 crc kubenswrapper[5113]: I0121 10:02:05.627440 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483162-lhnsv" event={"ID":"e4a90851-c23b-406a-8e35-4a894ef1e09d","Type":"ContainerDied","Data":"72cdc97b461812f2b09e804f8a677835a21d919d89244df621202879a027b7c1"}
Jan 21 10:02:05 crc kubenswrapper[5113]: I0121 10:02:05.627494 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72cdc97b461812f2b09e804f8a677835a21d919d89244df621202879a027b7c1"
Jan 21 10:02:06 crc kubenswrapper[5113]: I0121 10:02:06.079796 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483156-2j2xm"]
Jan 21 10:02:06 crc kubenswrapper[5113]: I0121 10:02:06.092812 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483156-2j2xm"]
Jan 21 10:02:06 crc kubenswrapper[5113]: I0121 10:02:06.138888 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dpbzc"
Jan 21 10:02:06 crc kubenswrapper[5113]: I0121 10:02:06.189791 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58-catalog-content\") pod \"0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58\" (UID: \"0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58\") "
Jan 21 10:02:06 crc kubenswrapper[5113]: I0121 10:02:06.189906 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58-utilities\") pod \"0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58\" (UID: \"0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58\") "
Jan 21 10:02:06 crc kubenswrapper[5113]: I0121 10:02:06.190010 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sh4jv\" (UniqueName: \"kubernetes.io/projected/0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58-kube-api-access-sh4jv\") pod \"0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58\" (UID: \"0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58\") "
Jan 21 10:02:06 crc kubenswrapper[5113]: I0121 10:02:06.191273 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58-utilities" (OuterVolumeSpecName: "utilities") pod "0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58" (UID: "0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:02:06 crc kubenswrapper[5113]: I0121 10:02:06.198174 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58-kube-api-access-sh4jv" (OuterVolumeSpecName: "kube-api-access-sh4jv") pod "0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58" (UID: "0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58"). InnerVolumeSpecName "kube-api-access-sh4jv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 10:02:06 crc kubenswrapper[5113]: I0121 10:02:06.251778 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58" (UID: "0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:02:06 crc kubenswrapper[5113]: I0121 10:02:06.291482 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 10:02:06 crc kubenswrapper[5113]: I0121 10:02:06.291513 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 10:02:06 crc kubenswrapper[5113]: I0121 10:02:06.291522 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sh4jv\" (UniqueName: \"kubernetes.io/projected/0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58-kube-api-access-sh4jv\") on node \"crc\" DevicePath \"\""
Jan 21 10:02:06 crc kubenswrapper[5113]: I0121 10:02:06.640867 5113 generic.go:358] "Generic (PLEG): container finished" podID="0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58" containerID="724696e7ad927baa8a771eb636f93a411d35995221ff5d13a560cccc45126226" exitCode=0
Jan 21 10:02:06 crc kubenswrapper[5113]: I0121 10:02:06.641018 5113 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-dpbzc" Jan 21 10:02:06 crc kubenswrapper[5113]: I0121 10:02:06.641032 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dpbzc" event={"ID":"0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58","Type":"ContainerDied","Data":"724696e7ad927baa8a771eb636f93a411d35995221ff5d13a560cccc45126226"} Jan 21 10:02:06 crc kubenswrapper[5113]: I0121 10:02:06.641161 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dpbzc" event={"ID":"0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58","Type":"ContainerDied","Data":"49224ed4729d2c36724e7183a64443581a042cc91828582053cf8e1505ae1d52"} Jan 21 10:02:06 crc kubenswrapper[5113]: I0121 10:02:06.641192 5113 scope.go:117] "RemoveContainer" containerID="724696e7ad927baa8a771eb636f93a411d35995221ff5d13a560cccc45126226" Jan 21 10:02:06 crc kubenswrapper[5113]: I0121 10:02:06.674559 5113 scope.go:117] "RemoveContainer" containerID="aff14103b7061588eff96dc5ee9cfb0dc3ceca23af2aa597f4a0048b067dcb1a" Jan 21 10:02:06 crc kubenswrapper[5113]: I0121 10:02:06.725777 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dpbzc"] Jan 21 10:02:06 crc kubenswrapper[5113]: I0121 10:02:06.735059 5113 scope.go:117] "RemoveContainer" containerID="db70e6bfc943f9ee854b5d2b60b243c0f6e75705a18c7038cbdf81017f7b75b0" Jan 21 10:02:06 crc kubenswrapper[5113]: I0121 10:02:06.743437 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dpbzc"] Jan 21 10:02:06 crc kubenswrapper[5113]: I0121 10:02:06.759566 5113 scope.go:117] "RemoveContainer" containerID="724696e7ad927baa8a771eb636f93a411d35995221ff5d13a560cccc45126226" Jan 21 10:02:06 crc kubenswrapper[5113]: E0121 10:02:06.760112 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"724696e7ad927baa8a771eb636f93a411d35995221ff5d13a560cccc45126226\": container with ID starting with 724696e7ad927baa8a771eb636f93a411d35995221ff5d13a560cccc45126226 not found: ID does not exist" containerID="724696e7ad927baa8a771eb636f93a411d35995221ff5d13a560cccc45126226" Jan 21 10:02:06 crc kubenswrapper[5113]: I0121 10:02:06.760174 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"724696e7ad927baa8a771eb636f93a411d35995221ff5d13a560cccc45126226"} err="failed to get container status \"724696e7ad927baa8a771eb636f93a411d35995221ff5d13a560cccc45126226\": rpc error: code = NotFound desc = could not find container \"724696e7ad927baa8a771eb636f93a411d35995221ff5d13a560cccc45126226\": container with ID starting with 724696e7ad927baa8a771eb636f93a411d35995221ff5d13a560cccc45126226 not found: ID does not exist" Jan 21 10:02:06 crc kubenswrapper[5113]: I0121 10:02:06.760200 5113 scope.go:117] "RemoveContainer" containerID="aff14103b7061588eff96dc5ee9cfb0dc3ceca23af2aa597f4a0048b067dcb1a" Jan 21 10:02:06 crc kubenswrapper[5113]: E0121 10:02:06.760516 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aff14103b7061588eff96dc5ee9cfb0dc3ceca23af2aa597f4a0048b067dcb1a\": container with ID starting with aff14103b7061588eff96dc5ee9cfb0dc3ceca23af2aa597f4a0048b067dcb1a not found: ID does not exist" containerID="aff14103b7061588eff96dc5ee9cfb0dc3ceca23af2aa597f4a0048b067dcb1a" Jan 21 10:02:06 crc kubenswrapper[5113]: I0121 10:02:06.760631 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aff14103b7061588eff96dc5ee9cfb0dc3ceca23af2aa597f4a0048b067dcb1a"} err="failed to get container status \"aff14103b7061588eff96dc5ee9cfb0dc3ceca23af2aa597f4a0048b067dcb1a\": rpc error: code = NotFound desc = could not find container \"aff14103b7061588eff96dc5ee9cfb0dc3ceca23af2aa597f4a0048b067dcb1a\": container with ID 
starting with aff14103b7061588eff96dc5ee9cfb0dc3ceca23af2aa597f4a0048b067dcb1a not found: ID does not exist" Jan 21 10:02:06 crc kubenswrapper[5113]: I0121 10:02:06.760754 5113 scope.go:117] "RemoveContainer" containerID="db70e6bfc943f9ee854b5d2b60b243c0f6e75705a18c7038cbdf81017f7b75b0" Jan 21 10:02:06 crc kubenswrapper[5113]: E0121 10:02:06.761107 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db70e6bfc943f9ee854b5d2b60b243c0f6e75705a18c7038cbdf81017f7b75b0\": container with ID starting with db70e6bfc943f9ee854b5d2b60b243c0f6e75705a18c7038cbdf81017f7b75b0 not found: ID does not exist" containerID="db70e6bfc943f9ee854b5d2b60b243c0f6e75705a18c7038cbdf81017f7b75b0" Jan 21 10:02:06 crc kubenswrapper[5113]: I0121 10:02:06.761134 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db70e6bfc943f9ee854b5d2b60b243c0f6e75705a18c7038cbdf81017f7b75b0"} err="failed to get container status \"db70e6bfc943f9ee854b5d2b60b243c0f6e75705a18c7038cbdf81017f7b75b0\": rpc error: code = NotFound desc = could not find container \"db70e6bfc943f9ee854b5d2b60b243c0f6e75705a18c7038cbdf81017f7b75b0\": container with ID starting with db70e6bfc943f9ee854b5d2b60b243c0f6e75705a18c7038cbdf81017f7b75b0 not found: ID does not exist" Jan 21 10:02:06 crc kubenswrapper[5113]: I0121 10:02:06.867001 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58" path="/var/lib/kubelet/pods/0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58/volumes" Jan 21 10:02:06 crc kubenswrapper[5113]: I0121 10:02:06.868206 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="706c9d74-2b42-4584-b005-ba8c14f26de0" path="/var/lib/kubelet/pods/706c9d74-2b42-4584-b005-ba8c14f26de0/volumes" Jan 21 10:02:28 crc kubenswrapper[5113]: I0121 10:02:28.340957 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:02:28 crc kubenswrapper[5113]: I0121 10:02:28.341636 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:02:32 crc kubenswrapper[5113]: I0121 10:02:32.160844 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-54gpl"] Jan 21 10:02:32 crc kubenswrapper[5113]: I0121 10:02:32.162535 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58" containerName="registry-server" Jan 21 10:02:32 crc kubenswrapper[5113]: I0121 10:02:32.162565 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58" containerName="registry-server" Jan 21 10:02:32 crc kubenswrapper[5113]: I0121 10:02:32.162584 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58" containerName="extract-content" Jan 21 10:02:32 crc kubenswrapper[5113]: I0121 10:02:32.162596 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58" containerName="extract-content" Jan 21 10:02:32 crc kubenswrapper[5113]: I0121 10:02:32.162647 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e4a90851-c23b-406a-8e35-4a894ef1e09d" containerName="oc" Jan 21 10:02:32 crc kubenswrapper[5113]: I0121 10:02:32.162660 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4a90851-c23b-406a-8e35-4a894ef1e09d" containerName="oc" Jan 21 
10:02:32 crc kubenswrapper[5113]: I0121 10:02:32.162684 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58" containerName="extract-utilities" Jan 21 10:02:32 crc kubenswrapper[5113]: I0121 10:02:32.162695 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58" containerName="extract-utilities" Jan 21 10:02:32 crc kubenswrapper[5113]: I0121 10:02:32.162940 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="0620ab8e-1f82-4a9b-9c0d-d02aaa16eb58" containerName="registry-server" Jan 21 10:02:32 crc kubenswrapper[5113]: I0121 10:02:32.162970 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="e4a90851-c23b-406a-8e35-4a894ef1e09d" containerName="oc" Jan 21 10:02:32 crc kubenswrapper[5113]: I0121 10:02:32.178691 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-54gpl" Jan 21 10:02:32 crc kubenswrapper[5113]: I0121 10:02:32.208064 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-54gpl"] Jan 21 10:02:32 crc kubenswrapper[5113]: I0121 10:02:32.279824 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd4tm\" (UniqueName: \"kubernetes.io/projected/1d9cd59d-0c87-442b-bc47-8e19bc1abe6e-kube-api-access-vd4tm\") pod \"redhat-operators-54gpl\" (UID: \"1d9cd59d-0c87-442b-bc47-8e19bc1abe6e\") " pod="openshift-marketplace/redhat-operators-54gpl" Jan 21 10:02:32 crc kubenswrapper[5113]: I0121 10:02:32.279882 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d9cd59d-0c87-442b-bc47-8e19bc1abe6e-catalog-content\") pod \"redhat-operators-54gpl\" (UID: \"1d9cd59d-0c87-442b-bc47-8e19bc1abe6e\") " pod="openshift-marketplace/redhat-operators-54gpl" Jan 21 
10:02:32 crc kubenswrapper[5113]: I0121 10:02:32.279941 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d9cd59d-0c87-442b-bc47-8e19bc1abe6e-utilities\") pod \"redhat-operators-54gpl\" (UID: \"1d9cd59d-0c87-442b-bc47-8e19bc1abe6e\") " pod="openshift-marketplace/redhat-operators-54gpl" Jan 21 10:02:32 crc kubenswrapper[5113]: I0121 10:02:32.381623 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vd4tm\" (UniqueName: \"kubernetes.io/projected/1d9cd59d-0c87-442b-bc47-8e19bc1abe6e-kube-api-access-vd4tm\") pod \"redhat-operators-54gpl\" (UID: \"1d9cd59d-0c87-442b-bc47-8e19bc1abe6e\") " pod="openshift-marketplace/redhat-operators-54gpl" Jan 21 10:02:32 crc kubenswrapper[5113]: I0121 10:02:32.381720 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d9cd59d-0c87-442b-bc47-8e19bc1abe6e-catalog-content\") pod \"redhat-operators-54gpl\" (UID: \"1d9cd59d-0c87-442b-bc47-8e19bc1abe6e\") " pod="openshift-marketplace/redhat-operators-54gpl" Jan 21 10:02:32 crc kubenswrapper[5113]: I0121 10:02:32.381841 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d9cd59d-0c87-442b-bc47-8e19bc1abe6e-utilities\") pod \"redhat-operators-54gpl\" (UID: \"1d9cd59d-0c87-442b-bc47-8e19bc1abe6e\") " pod="openshift-marketplace/redhat-operators-54gpl" Jan 21 10:02:32 crc kubenswrapper[5113]: I0121 10:02:32.382244 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d9cd59d-0c87-442b-bc47-8e19bc1abe6e-catalog-content\") pod \"redhat-operators-54gpl\" (UID: \"1d9cd59d-0c87-442b-bc47-8e19bc1abe6e\") " pod="openshift-marketplace/redhat-operators-54gpl" Jan 21 10:02:32 crc kubenswrapper[5113]: I0121 
10:02:32.382593 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d9cd59d-0c87-442b-bc47-8e19bc1abe6e-utilities\") pod \"redhat-operators-54gpl\" (UID: \"1d9cd59d-0c87-442b-bc47-8e19bc1abe6e\") " pod="openshift-marketplace/redhat-operators-54gpl" Jan 21 10:02:32 crc kubenswrapper[5113]: I0121 10:02:32.402861 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd4tm\" (UniqueName: \"kubernetes.io/projected/1d9cd59d-0c87-442b-bc47-8e19bc1abe6e-kube-api-access-vd4tm\") pod \"redhat-operators-54gpl\" (UID: \"1d9cd59d-0c87-442b-bc47-8e19bc1abe6e\") " pod="openshift-marketplace/redhat-operators-54gpl" Jan 21 10:02:32 crc kubenswrapper[5113]: I0121 10:02:32.508534 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-54gpl" Jan 21 10:02:32 crc kubenswrapper[5113]: I0121 10:02:32.751905 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-54gpl"] Jan 21 10:02:32 crc kubenswrapper[5113]: I0121 10:02:32.901857 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-54gpl" event={"ID":"1d9cd59d-0c87-442b-bc47-8e19bc1abe6e","Type":"ContainerStarted","Data":"c72884a8daeeffd608cb0400abd1c3866b5e030e565e648b092d7daccbf5419a"} Jan 21 10:02:33 crc kubenswrapper[5113]: I0121 10:02:33.912234 5113 generic.go:358] "Generic (PLEG): container finished" podID="1d9cd59d-0c87-442b-bc47-8e19bc1abe6e" containerID="8709741e119f4695fe6c5e58a1ce937f4b27216eb23fecf6580e415d284be8eb" exitCode=0 Jan 21 10:02:33 crc kubenswrapper[5113]: I0121 10:02:33.912290 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-54gpl" event={"ID":"1d9cd59d-0c87-442b-bc47-8e19bc1abe6e","Type":"ContainerDied","Data":"8709741e119f4695fe6c5e58a1ce937f4b27216eb23fecf6580e415d284be8eb"} Jan 21 10:02:35 crc 
kubenswrapper[5113]: I0121 10:02:35.933080 5113 generic.go:358] "Generic (PLEG): container finished" podID="1d9cd59d-0c87-442b-bc47-8e19bc1abe6e" containerID="293ca522749c603fc8970a12cb0c6f31e865b5769312dfdf496b92f7807bc3fe" exitCode=0 Jan 21 10:02:35 crc kubenswrapper[5113]: I0121 10:02:35.933346 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-54gpl" event={"ID":"1d9cd59d-0c87-442b-bc47-8e19bc1abe6e","Type":"ContainerDied","Data":"293ca522749c603fc8970a12cb0c6f31e865b5769312dfdf496b92f7807bc3fe"} Jan 21 10:02:36 crc kubenswrapper[5113]: I0121 10:02:36.951723 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-54gpl" event={"ID":"1d9cd59d-0c87-442b-bc47-8e19bc1abe6e","Type":"ContainerStarted","Data":"b21094835e880d64b8e55c63889891675e09eaec72197ce2c65606fb24f9a607"} Jan 21 10:02:36 crc kubenswrapper[5113]: I0121 10:02:36.982222 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-54gpl" podStartSLOduration=3.934542337 podStartE2EDuration="4.98219168s" podCreationTimestamp="2026-01-21 10:02:32 +0000 UTC" firstStartedPulling="2026-01-21 10:02:33.91335515 +0000 UTC m=+2683.414182199" lastFinishedPulling="2026-01-21 10:02:34.961004443 +0000 UTC m=+2684.461831542" observedRunningTime="2026-01-21 10:02:36.973600976 +0000 UTC m=+2686.474428065" watchObservedRunningTime="2026-01-21 10:02:36.98219168 +0000 UTC m=+2686.483018769" Jan 21 10:02:42 crc kubenswrapper[5113]: I0121 10:02:42.509052 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-54gpl" Jan 21 10:02:42 crc kubenswrapper[5113]: I0121 10:02:42.509866 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-54gpl" Jan 21 10:02:42 crc kubenswrapper[5113]: I0121 10:02:42.588727 5113 kubelet.go:2658] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-54gpl" Jan 21 10:02:43 crc kubenswrapper[5113]: I0121 10:02:43.077822 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-54gpl" Jan 21 10:02:43 crc kubenswrapper[5113]: I0121 10:02:43.147039 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-54gpl"] Jan 21 10:02:45 crc kubenswrapper[5113]: I0121 10:02:45.023386 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-54gpl" podUID="1d9cd59d-0c87-442b-bc47-8e19bc1abe6e" containerName="registry-server" containerID="cri-o://b21094835e880d64b8e55c63889891675e09eaec72197ce2c65606fb24f9a607" gracePeriod=2 Jan 21 10:02:45 crc kubenswrapper[5113]: I0121 10:02:45.535644 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-54gpl" Jan 21 10:02:45 crc kubenswrapper[5113]: I0121 10:02:45.600432 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d9cd59d-0c87-442b-bc47-8e19bc1abe6e-catalog-content\") pod \"1d9cd59d-0c87-442b-bc47-8e19bc1abe6e\" (UID: \"1d9cd59d-0c87-442b-bc47-8e19bc1abe6e\") " Jan 21 10:02:45 crc kubenswrapper[5113]: I0121 10:02:45.600883 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d9cd59d-0c87-442b-bc47-8e19bc1abe6e-utilities\") pod \"1d9cd59d-0c87-442b-bc47-8e19bc1abe6e\" (UID: \"1d9cd59d-0c87-442b-bc47-8e19bc1abe6e\") " Jan 21 10:02:45 crc kubenswrapper[5113]: I0121 10:02:45.601088 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vd4tm\" (UniqueName: \"kubernetes.io/projected/1d9cd59d-0c87-442b-bc47-8e19bc1abe6e-kube-api-access-vd4tm\") pod 
\"1d9cd59d-0c87-442b-bc47-8e19bc1abe6e\" (UID: \"1d9cd59d-0c87-442b-bc47-8e19bc1abe6e\") " Jan 21 10:02:45 crc kubenswrapper[5113]: I0121 10:02:45.602658 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d9cd59d-0c87-442b-bc47-8e19bc1abe6e-utilities" (OuterVolumeSpecName: "utilities") pod "1d9cd59d-0c87-442b-bc47-8e19bc1abe6e" (UID: "1d9cd59d-0c87-442b-bc47-8e19bc1abe6e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:02:45 crc kubenswrapper[5113]: I0121 10:02:45.615071 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d9cd59d-0c87-442b-bc47-8e19bc1abe6e-kube-api-access-vd4tm" (OuterVolumeSpecName: "kube-api-access-vd4tm") pod "1d9cd59d-0c87-442b-bc47-8e19bc1abe6e" (UID: "1d9cd59d-0c87-442b-bc47-8e19bc1abe6e"). InnerVolumeSpecName "kube-api-access-vd4tm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:02:45 crc kubenswrapper[5113]: I0121 10:02:45.702875 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d9cd59d-0c87-442b-bc47-8e19bc1abe6e-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:02:45 crc kubenswrapper[5113]: I0121 10:02:45.702912 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vd4tm\" (UniqueName: \"kubernetes.io/projected/1d9cd59d-0c87-442b-bc47-8e19bc1abe6e-kube-api-access-vd4tm\") on node \"crc\" DevicePath \"\"" Jan 21 10:02:45 crc kubenswrapper[5113]: I0121 10:02:45.952353 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d9cd59d-0c87-442b-bc47-8e19bc1abe6e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d9cd59d-0c87-442b-bc47-8e19bc1abe6e" (UID: "1d9cd59d-0c87-442b-bc47-8e19bc1abe6e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:02:46 crc kubenswrapper[5113]: I0121 10:02:46.008475 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d9cd59d-0c87-442b-bc47-8e19bc1abe6e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:02:46 crc kubenswrapper[5113]: I0121 10:02:46.035037 5113 generic.go:358] "Generic (PLEG): container finished" podID="1d9cd59d-0c87-442b-bc47-8e19bc1abe6e" containerID="b21094835e880d64b8e55c63889891675e09eaec72197ce2c65606fb24f9a607" exitCode=0 Jan 21 10:02:46 crc kubenswrapper[5113]: I0121 10:02:46.035548 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-54gpl" event={"ID":"1d9cd59d-0c87-442b-bc47-8e19bc1abe6e","Type":"ContainerDied","Data":"b21094835e880d64b8e55c63889891675e09eaec72197ce2c65606fb24f9a607"} Jan 21 10:02:46 crc kubenswrapper[5113]: I0121 10:02:46.035618 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-54gpl" event={"ID":"1d9cd59d-0c87-442b-bc47-8e19bc1abe6e","Type":"ContainerDied","Data":"c72884a8daeeffd608cb0400abd1c3866b5e030e565e648b092d7daccbf5419a"} Jan 21 10:02:46 crc kubenswrapper[5113]: I0121 10:02:46.035642 5113 scope.go:117] "RemoveContainer" containerID="b21094835e880d64b8e55c63889891675e09eaec72197ce2c65606fb24f9a607" Jan 21 10:02:46 crc kubenswrapper[5113]: I0121 10:02:46.035907 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-54gpl" Jan 21 10:02:46 crc kubenswrapper[5113]: I0121 10:02:46.064385 5113 scope.go:117] "RemoveContainer" containerID="293ca522749c603fc8970a12cb0c6f31e865b5769312dfdf496b92f7807bc3fe" Jan 21 10:02:46 crc kubenswrapper[5113]: I0121 10:02:46.080034 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-54gpl"] Jan 21 10:02:46 crc kubenswrapper[5113]: I0121 10:02:46.087144 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-54gpl"] Jan 21 10:02:46 crc kubenswrapper[5113]: I0121 10:02:46.110200 5113 scope.go:117] "RemoveContainer" containerID="8709741e119f4695fe6c5e58a1ce937f4b27216eb23fecf6580e415d284be8eb" Jan 21 10:02:46 crc kubenswrapper[5113]: I0121 10:02:46.142658 5113 scope.go:117] "RemoveContainer" containerID="b21094835e880d64b8e55c63889891675e09eaec72197ce2c65606fb24f9a607" Jan 21 10:02:46 crc kubenswrapper[5113]: E0121 10:02:46.143453 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b21094835e880d64b8e55c63889891675e09eaec72197ce2c65606fb24f9a607\": container with ID starting with b21094835e880d64b8e55c63889891675e09eaec72197ce2c65606fb24f9a607 not found: ID does not exist" containerID="b21094835e880d64b8e55c63889891675e09eaec72197ce2c65606fb24f9a607" Jan 21 10:02:46 crc kubenswrapper[5113]: I0121 10:02:46.143499 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b21094835e880d64b8e55c63889891675e09eaec72197ce2c65606fb24f9a607"} err="failed to get container status \"b21094835e880d64b8e55c63889891675e09eaec72197ce2c65606fb24f9a607\": rpc error: code = NotFound desc = could not find container \"b21094835e880d64b8e55c63889891675e09eaec72197ce2c65606fb24f9a607\": container with ID starting with b21094835e880d64b8e55c63889891675e09eaec72197ce2c65606fb24f9a607 not found: ID does 
not exist" Jan 21 10:02:46 crc kubenswrapper[5113]: I0121 10:02:46.143534 5113 scope.go:117] "RemoveContainer" containerID="293ca522749c603fc8970a12cb0c6f31e865b5769312dfdf496b92f7807bc3fe" Jan 21 10:02:46 crc kubenswrapper[5113]: E0121 10:02:46.144212 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"293ca522749c603fc8970a12cb0c6f31e865b5769312dfdf496b92f7807bc3fe\": container with ID starting with 293ca522749c603fc8970a12cb0c6f31e865b5769312dfdf496b92f7807bc3fe not found: ID does not exist" containerID="293ca522749c603fc8970a12cb0c6f31e865b5769312dfdf496b92f7807bc3fe" Jan 21 10:02:46 crc kubenswrapper[5113]: I0121 10:02:46.144253 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"293ca522749c603fc8970a12cb0c6f31e865b5769312dfdf496b92f7807bc3fe"} err="failed to get container status \"293ca522749c603fc8970a12cb0c6f31e865b5769312dfdf496b92f7807bc3fe\": rpc error: code = NotFound desc = could not find container \"293ca522749c603fc8970a12cb0c6f31e865b5769312dfdf496b92f7807bc3fe\": container with ID starting with 293ca522749c603fc8970a12cb0c6f31e865b5769312dfdf496b92f7807bc3fe not found: ID does not exist" Jan 21 10:02:46 crc kubenswrapper[5113]: I0121 10:02:46.144278 5113 scope.go:117] "RemoveContainer" containerID="8709741e119f4695fe6c5e58a1ce937f4b27216eb23fecf6580e415d284be8eb" Jan 21 10:02:46 crc kubenswrapper[5113]: E0121 10:02:46.145062 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8709741e119f4695fe6c5e58a1ce937f4b27216eb23fecf6580e415d284be8eb\": container with ID starting with 8709741e119f4695fe6c5e58a1ce937f4b27216eb23fecf6580e415d284be8eb not found: ID does not exist" containerID="8709741e119f4695fe6c5e58a1ce937f4b27216eb23fecf6580e415d284be8eb" Jan 21 10:02:46 crc kubenswrapper[5113]: I0121 10:02:46.145094 5113 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8709741e119f4695fe6c5e58a1ce937f4b27216eb23fecf6580e415d284be8eb"} err="failed to get container status \"8709741e119f4695fe6c5e58a1ce937f4b27216eb23fecf6580e415d284be8eb\": rpc error: code = NotFound desc = could not find container \"8709741e119f4695fe6c5e58a1ce937f4b27216eb23fecf6580e415d284be8eb\": container with ID starting with 8709741e119f4695fe6c5e58a1ce937f4b27216eb23fecf6580e415d284be8eb not found: ID does not exist" Jan 21 10:02:46 crc kubenswrapper[5113]: I0121 10:02:46.852608 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d9cd59d-0c87-442b-bc47-8e19bc1abe6e" path="/var/lib/kubelet/pods/1d9cd59d-0c87-442b-bc47-8e19bc1abe6e/volumes" Jan 21 10:02:52 crc kubenswrapper[5113]: I0121 10:02:52.111544 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vcw7s_11da35cd-b282-4537-ac8f-b6c86b18c21f/kube-multus/0.log" Jan 21 10:02:52 crc kubenswrapper[5113]: I0121 10:02:52.117091 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vcw7s_11da35cd-b282-4537-ac8f-b6c86b18c21f/kube-multus/0.log" Jan 21 10:02:52 crc kubenswrapper[5113]: I0121 10:02:52.124406 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 10:02:52 crc kubenswrapper[5113]: I0121 10:02:52.127554 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 10:02:58 crc kubenswrapper[5113]: I0121 10:02:58.340518 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Jan 21 10:02:58 crc kubenswrapper[5113]: I0121 10:02:58.341057 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:03:02 crc kubenswrapper[5113]: I0121 10:03:02.327369 5113 scope.go:117] "RemoveContainer" containerID="fef0c9e8bcfa9ab8da849b5df96ff7aea18143093c7bf251a681efd14d3ea0d6" Jan 21 10:03:02 crc kubenswrapper[5113]: I0121 10:03:02.510826 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lfhp2"] Jan 21 10:03:02 crc kubenswrapper[5113]: I0121 10:03:02.513368 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1d9cd59d-0c87-442b-bc47-8e19bc1abe6e" containerName="extract-utilities" Jan 21 10:03:02 crc kubenswrapper[5113]: I0121 10:03:02.513402 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d9cd59d-0c87-442b-bc47-8e19bc1abe6e" containerName="extract-utilities" Jan 21 10:03:02 crc kubenswrapper[5113]: I0121 10:03:02.513425 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1d9cd59d-0c87-442b-bc47-8e19bc1abe6e" containerName="registry-server" Jan 21 10:03:02 crc kubenswrapper[5113]: I0121 10:03:02.513435 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d9cd59d-0c87-442b-bc47-8e19bc1abe6e" containerName="registry-server" Jan 21 10:03:02 crc kubenswrapper[5113]: I0121 10:03:02.513485 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1d9cd59d-0c87-442b-bc47-8e19bc1abe6e" containerName="extract-content" Jan 21 10:03:02 crc kubenswrapper[5113]: I0121 10:03:02.513496 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d9cd59d-0c87-442b-bc47-8e19bc1abe6e" 
containerName="extract-content" Jan 21 10:03:02 crc kubenswrapper[5113]: I0121 10:03:02.513679 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="1d9cd59d-0c87-442b-bc47-8e19bc1abe6e" containerName="registry-server" Jan 21 10:03:02 crc kubenswrapper[5113]: I0121 10:03:02.726391 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lfhp2"] Jan 21 10:03:02 crc kubenswrapper[5113]: I0121 10:03:02.726689 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lfhp2" Jan 21 10:03:02 crc kubenswrapper[5113]: I0121 10:03:02.810896 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8ff6bc6-6452-4080-bf05-d199c628db93-utilities\") pod \"certified-operators-lfhp2\" (UID: \"c8ff6bc6-6452-4080-bf05-d199c628db93\") " pod="openshift-marketplace/certified-operators-lfhp2" Jan 21 10:03:02 crc kubenswrapper[5113]: I0121 10:03:02.811223 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ps79\" (UniqueName: \"kubernetes.io/projected/c8ff6bc6-6452-4080-bf05-d199c628db93-kube-api-access-8ps79\") pod \"certified-operators-lfhp2\" (UID: \"c8ff6bc6-6452-4080-bf05-d199c628db93\") " pod="openshift-marketplace/certified-operators-lfhp2" Jan 21 10:03:02 crc kubenswrapper[5113]: I0121 10:03:02.811330 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8ff6bc6-6452-4080-bf05-d199c628db93-catalog-content\") pod \"certified-operators-lfhp2\" (UID: \"c8ff6bc6-6452-4080-bf05-d199c628db93\") " pod="openshift-marketplace/certified-operators-lfhp2" Jan 21 10:03:02 crc kubenswrapper[5113]: I0121 10:03:02.912195 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8ps79\" (UniqueName: \"kubernetes.io/projected/c8ff6bc6-6452-4080-bf05-d199c628db93-kube-api-access-8ps79\") pod \"certified-operators-lfhp2\" (UID: \"c8ff6bc6-6452-4080-bf05-d199c628db93\") " pod="openshift-marketplace/certified-operators-lfhp2" Jan 21 10:03:02 crc kubenswrapper[5113]: I0121 10:03:02.912433 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8ff6bc6-6452-4080-bf05-d199c628db93-catalog-content\") pod \"certified-operators-lfhp2\" (UID: \"c8ff6bc6-6452-4080-bf05-d199c628db93\") " pod="openshift-marketplace/certified-operators-lfhp2" Jan 21 10:03:02 crc kubenswrapper[5113]: I0121 10:03:02.912601 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8ff6bc6-6452-4080-bf05-d199c628db93-utilities\") pod \"certified-operators-lfhp2\" (UID: \"c8ff6bc6-6452-4080-bf05-d199c628db93\") " pod="openshift-marketplace/certified-operators-lfhp2" Jan 21 10:03:02 crc kubenswrapper[5113]: I0121 10:03:02.913069 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8ff6bc6-6452-4080-bf05-d199c628db93-catalog-content\") pod \"certified-operators-lfhp2\" (UID: \"c8ff6bc6-6452-4080-bf05-d199c628db93\") " pod="openshift-marketplace/certified-operators-lfhp2" Jan 21 10:03:02 crc kubenswrapper[5113]: I0121 10:03:02.913123 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8ff6bc6-6452-4080-bf05-d199c628db93-utilities\") pod \"certified-operators-lfhp2\" (UID: \"c8ff6bc6-6452-4080-bf05-d199c628db93\") " pod="openshift-marketplace/certified-operators-lfhp2" Jan 21 10:03:02 crc kubenswrapper[5113]: I0121 10:03:02.934652 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ps79\" (UniqueName: 
\"kubernetes.io/projected/c8ff6bc6-6452-4080-bf05-d199c628db93-kube-api-access-8ps79\") pod \"certified-operators-lfhp2\" (UID: \"c8ff6bc6-6452-4080-bf05-d199c628db93\") " pod="openshift-marketplace/certified-operators-lfhp2" Jan 21 10:03:03 crc kubenswrapper[5113]: I0121 10:03:03.045874 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lfhp2" Jan 21 10:03:03 crc kubenswrapper[5113]: I0121 10:03:03.508715 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lfhp2"] Jan 21 10:03:04 crc kubenswrapper[5113]: I0121 10:03:04.254104 5113 generic.go:358] "Generic (PLEG): container finished" podID="c8ff6bc6-6452-4080-bf05-d199c628db93" containerID="2e5466c11f7ec28aa4e423b857acf4c427453a5a5202f3a62e8ecf01453ab8a5" exitCode=0 Jan 21 10:03:04 crc kubenswrapper[5113]: I0121 10:03:04.254186 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lfhp2" event={"ID":"c8ff6bc6-6452-4080-bf05-d199c628db93","Type":"ContainerDied","Data":"2e5466c11f7ec28aa4e423b857acf4c427453a5a5202f3a62e8ecf01453ab8a5"} Jan 21 10:03:04 crc kubenswrapper[5113]: I0121 10:03:04.254573 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lfhp2" event={"ID":"c8ff6bc6-6452-4080-bf05-d199c628db93","Type":"ContainerStarted","Data":"811910b5baf4015a8f59d914a46846a886dc18bebde044906a927887563e18c7"} Jan 21 10:03:07 crc kubenswrapper[5113]: I0121 10:03:07.293426 5113 generic.go:358] "Generic (PLEG): container finished" podID="c8ff6bc6-6452-4080-bf05-d199c628db93" containerID="bba1b426e3ba980cfdac4017b9ca5641e8480bb64cc9e8addbfe784521607378" exitCode=0 Jan 21 10:03:07 crc kubenswrapper[5113]: I0121 10:03:07.293514 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lfhp2" 
event={"ID":"c8ff6bc6-6452-4080-bf05-d199c628db93","Type":"ContainerDied","Data":"bba1b426e3ba980cfdac4017b9ca5641e8480bb64cc9e8addbfe784521607378"} Jan 21 10:03:08 crc kubenswrapper[5113]: I0121 10:03:08.311818 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lfhp2" event={"ID":"c8ff6bc6-6452-4080-bf05-d199c628db93","Type":"ContainerStarted","Data":"93d7be2970b40ab9dccfbaa917998d295a0a0a5e5443b71f4fcc6dfc89139deb"} Jan 21 10:03:08 crc kubenswrapper[5113]: I0121 10:03:08.339180 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lfhp2" podStartSLOduration=4.421701074 podStartE2EDuration="6.339154288s" podCreationTimestamp="2026-01-21 10:03:02 +0000 UTC" firstStartedPulling="2026-01-21 10:03:04.255626937 +0000 UTC m=+2713.756454026" lastFinishedPulling="2026-01-21 10:03:06.173080151 +0000 UTC m=+2715.673907240" observedRunningTime="2026-01-21 10:03:08.334834425 +0000 UTC m=+2717.835661504" watchObservedRunningTime="2026-01-21 10:03:08.339154288 +0000 UTC m=+2717.839981347" Jan 21 10:03:13 crc kubenswrapper[5113]: I0121 10:03:13.046988 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-lfhp2" Jan 21 10:03:13 crc kubenswrapper[5113]: I0121 10:03:13.047647 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lfhp2" Jan 21 10:03:13 crc kubenswrapper[5113]: I0121 10:03:13.109553 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lfhp2" Jan 21 10:03:13 crc kubenswrapper[5113]: I0121 10:03:13.411141 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lfhp2" Jan 21 10:03:14 crc kubenswrapper[5113]: I0121 10:03:14.492330 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-lfhp2"] Jan 21 10:03:15 crc kubenswrapper[5113]: I0121 10:03:15.373632 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lfhp2" podUID="c8ff6bc6-6452-4080-bf05-d199c628db93" containerName="registry-server" containerID="cri-o://93d7be2970b40ab9dccfbaa917998d295a0a0a5e5443b71f4fcc6dfc89139deb" gracePeriod=2 Jan 21 10:03:16 crc kubenswrapper[5113]: I0121 10:03:16.341478 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lfhp2" Jan 21 10:03:16 crc kubenswrapper[5113]: I0121 10:03:16.377635 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8ff6bc6-6452-4080-bf05-d199c628db93-utilities\") pod \"c8ff6bc6-6452-4080-bf05-d199c628db93\" (UID: \"c8ff6bc6-6452-4080-bf05-d199c628db93\") " Jan 21 10:03:16 crc kubenswrapper[5113]: I0121 10:03:16.377795 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ps79\" (UniqueName: \"kubernetes.io/projected/c8ff6bc6-6452-4080-bf05-d199c628db93-kube-api-access-8ps79\") pod \"c8ff6bc6-6452-4080-bf05-d199c628db93\" (UID: \"c8ff6bc6-6452-4080-bf05-d199c628db93\") " Jan 21 10:03:16 crc kubenswrapper[5113]: I0121 10:03:16.377864 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8ff6bc6-6452-4080-bf05-d199c628db93-catalog-content\") pod \"c8ff6bc6-6452-4080-bf05-d199c628db93\" (UID: \"c8ff6bc6-6452-4080-bf05-d199c628db93\") " Jan 21 10:03:16 crc kubenswrapper[5113]: I0121 10:03:16.380413 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8ff6bc6-6452-4080-bf05-d199c628db93-utilities" (OuterVolumeSpecName: "utilities") pod "c8ff6bc6-6452-4080-bf05-d199c628db93" (UID: 
"c8ff6bc6-6452-4080-bf05-d199c628db93"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:03:16 crc kubenswrapper[5113]: I0121 10:03:16.383170 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8ff6bc6-6452-4080-bf05-d199c628db93-kube-api-access-8ps79" (OuterVolumeSpecName: "kube-api-access-8ps79") pod "c8ff6bc6-6452-4080-bf05-d199c628db93" (UID: "c8ff6bc6-6452-4080-bf05-d199c628db93"). InnerVolumeSpecName "kube-api-access-8ps79". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:03:16 crc kubenswrapper[5113]: I0121 10:03:16.389147 5113 generic.go:358] "Generic (PLEG): container finished" podID="c8ff6bc6-6452-4080-bf05-d199c628db93" containerID="93d7be2970b40ab9dccfbaa917998d295a0a0a5e5443b71f4fcc6dfc89139deb" exitCode=0 Jan 21 10:03:16 crc kubenswrapper[5113]: I0121 10:03:16.389207 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lfhp2" event={"ID":"c8ff6bc6-6452-4080-bf05-d199c628db93","Type":"ContainerDied","Data":"93d7be2970b40ab9dccfbaa917998d295a0a0a5e5443b71f4fcc6dfc89139deb"} Jan 21 10:03:16 crc kubenswrapper[5113]: I0121 10:03:16.389243 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lfhp2" event={"ID":"c8ff6bc6-6452-4080-bf05-d199c628db93","Type":"ContainerDied","Data":"811910b5baf4015a8f59d914a46846a886dc18bebde044906a927887563e18c7"} Jan 21 10:03:16 crc kubenswrapper[5113]: I0121 10:03:16.389269 5113 scope.go:117] "RemoveContainer" containerID="93d7be2970b40ab9dccfbaa917998d295a0a0a5e5443b71f4fcc6dfc89139deb" Jan 21 10:03:16 crc kubenswrapper[5113]: I0121 10:03:16.389450 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lfhp2" Jan 21 10:03:16 crc kubenswrapper[5113]: I0121 10:03:16.433049 5113 scope.go:117] "RemoveContainer" containerID="bba1b426e3ba980cfdac4017b9ca5641e8480bb64cc9e8addbfe784521607378" Jan 21 10:03:16 crc kubenswrapper[5113]: I0121 10:03:16.452065 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8ff6bc6-6452-4080-bf05-d199c628db93-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8ff6bc6-6452-4080-bf05-d199c628db93" (UID: "c8ff6bc6-6452-4080-bf05-d199c628db93"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:03:16 crc kubenswrapper[5113]: I0121 10:03:16.460689 5113 scope.go:117] "RemoveContainer" containerID="2e5466c11f7ec28aa4e423b857acf4c427453a5a5202f3a62e8ecf01453ab8a5" Jan 21 10:03:16 crc kubenswrapper[5113]: I0121 10:03:16.480658 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8ff6bc6-6452-4080-bf05-d199c628db93-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:03:16 crc kubenswrapper[5113]: I0121 10:03:16.480698 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8ff6bc6-6452-4080-bf05-d199c628db93-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:03:16 crc kubenswrapper[5113]: I0121 10:03:16.480711 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8ps79\" (UniqueName: \"kubernetes.io/projected/c8ff6bc6-6452-4080-bf05-d199c628db93-kube-api-access-8ps79\") on node \"crc\" DevicePath \"\"" Jan 21 10:03:16 crc kubenswrapper[5113]: I0121 10:03:16.491974 5113 scope.go:117] "RemoveContainer" containerID="93d7be2970b40ab9dccfbaa917998d295a0a0a5e5443b71f4fcc6dfc89139deb" Jan 21 10:03:16 crc kubenswrapper[5113]: E0121 10:03:16.492433 5113 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"93d7be2970b40ab9dccfbaa917998d295a0a0a5e5443b71f4fcc6dfc89139deb\": container with ID starting with 93d7be2970b40ab9dccfbaa917998d295a0a0a5e5443b71f4fcc6dfc89139deb not found: ID does not exist" containerID="93d7be2970b40ab9dccfbaa917998d295a0a0a5e5443b71f4fcc6dfc89139deb" Jan 21 10:03:16 crc kubenswrapper[5113]: I0121 10:03:16.492483 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93d7be2970b40ab9dccfbaa917998d295a0a0a5e5443b71f4fcc6dfc89139deb"} err="failed to get container status \"93d7be2970b40ab9dccfbaa917998d295a0a0a5e5443b71f4fcc6dfc89139deb\": rpc error: code = NotFound desc = could not find container \"93d7be2970b40ab9dccfbaa917998d295a0a0a5e5443b71f4fcc6dfc89139deb\": container with ID starting with 93d7be2970b40ab9dccfbaa917998d295a0a0a5e5443b71f4fcc6dfc89139deb not found: ID does not exist" Jan 21 10:03:16 crc kubenswrapper[5113]: I0121 10:03:16.492515 5113 scope.go:117] "RemoveContainer" containerID="bba1b426e3ba980cfdac4017b9ca5641e8480bb64cc9e8addbfe784521607378" Jan 21 10:03:16 crc kubenswrapper[5113]: E0121 10:03:16.492827 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bba1b426e3ba980cfdac4017b9ca5641e8480bb64cc9e8addbfe784521607378\": container with ID starting with bba1b426e3ba980cfdac4017b9ca5641e8480bb64cc9e8addbfe784521607378 not found: ID does not exist" containerID="bba1b426e3ba980cfdac4017b9ca5641e8480bb64cc9e8addbfe784521607378" Jan 21 10:03:16 crc kubenswrapper[5113]: I0121 10:03:16.492873 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bba1b426e3ba980cfdac4017b9ca5641e8480bb64cc9e8addbfe784521607378"} err="failed to get container status \"bba1b426e3ba980cfdac4017b9ca5641e8480bb64cc9e8addbfe784521607378\": rpc error: code = NotFound desc = could not find container 
\"bba1b426e3ba980cfdac4017b9ca5641e8480bb64cc9e8addbfe784521607378\": container with ID starting with bba1b426e3ba980cfdac4017b9ca5641e8480bb64cc9e8addbfe784521607378 not found: ID does not exist" Jan 21 10:03:16 crc kubenswrapper[5113]: I0121 10:03:16.492900 5113 scope.go:117] "RemoveContainer" containerID="2e5466c11f7ec28aa4e423b857acf4c427453a5a5202f3a62e8ecf01453ab8a5" Jan 21 10:03:16 crc kubenswrapper[5113]: E0121 10:03:16.493123 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e5466c11f7ec28aa4e423b857acf4c427453a5a5202f3a62e8ecf01453ab8a5\": container with ID starting with 2e5466c11f7ec28aa4e423b857acf4c427453a5a5202f3a62e8ecf01453ab8a5 not found: ID does not exist" containerID="2e5466c11f7ec28aa4e423b857acf4c427453a5a5202f3a62e8ecf01453ab8a5" Jan 21 10:03:16 crc kubenswrapper[5113]: I0121 10:03:16.493156 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e5466c11f7ec28aa4e423b857acf4c427453a5a5202f3a62e8ecf01453ab8a5"} err="failed to get container status \"2e5466c11f7ec28aa4e423b857acf4c427453a5a5202f3a62e8ecf01453ab8a5\": rpc error: code = NotFound desc = could not find container \"2e5466c11f7ec28aa4e423b857acf4c427453a5a5202f3a62e8ecf01453ab8a5\": container with ID starting with 2e5466c11f7ec28aa4e423b857acf4c427453a5a5202f3a62e8ecf01453ab8a5 not found: ID does not exist" Jan 21 10:03:16 crc kubenswrapper[5113]: I0121 10:03:16.732473 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lfhp2"] Jan 21 10:03:16 crc kubenswrapper[5113]: I0121 10:03:16.741392 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lfhp2"] Jan 21 10:03:16 crc kubenswrapper[5113]: I0121 10:03:16.857308 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8ff6bc6-6452-4080-bf05-d199c628db93" 
path="/var/lib/kubelet/pods/c8ff6bc6-6452-4080-bf05-d199c628db93/volumes" Jan 21 10:03:28 crc kubenswrapper[5113]: I0121 10:03:28.339846 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:03:28 crc kubenswrapper[5113]: I0121 10:03:28.340369 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:03:28 crc kubenswrapper[5113]: I0121 10:03:28.340427 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" Jan 21 10:03:28 crc kubenswrapper[5113]: I0121 10:03:28.341199 5113 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b98820d48460bd73ec9a158e9d5d327f25887cad19e28743f6fc869fcf62fe1d"} pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 10:03:28 crc kubenswrapper[5113]: I0121 10:03:28.341283 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" containerID="cri-o://b98820d48460bd73ec9a158e9d5d327f25887cad19e28743f6fc869fcf62fe1d" gracePeriod=600 Jan 21 10:03:28 crc kubenswrapper[5113]: I0121 10:03:28.519226 5113 generic.go:358] "Generic (PLEG): container finished" 
podID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerID="b98820d48460bd73ec9a158e9d5d327f25887cad19e28743f6fc869fcf62fe1d" exitCode=0 Jan 21 10:03:28 crc kubenswrapper[5113]: I0121 10:03:28.519453 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" event={"ID":"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd","Type":"ContainerDied","Data":"b98820d48460bd73ec9a158e9d5d327f25887cad19e28743f6fc869fcf62fe1d"} Jan 21 10:03:28 crc kubenswrapper[5113]: I0121 10:03:28.519491 5113 scope.go:117] "RemoveContainer" containerID="ee5539020b573801c63958627ee7726587f769091e02d7a0cbaac2ca07a86658" Jan 21 10:03:29 crc kubenswrapper[5113]: I0121 10:03:29.533717 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" event={"ID":"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd","Type":"ContainerStarted","Data":"a632198cd672a90b72f5e279e21531758de9f10569214cbdb0b46bdbfae29b1f"} Jan 21 10:04:00 crc kubenswrapper[5113]: I0121 10:04:00.146031 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483164-247tm"] Jan 21 10:04:00 crc kubenswrapper[5113]: I0121 10:04:00.149637 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c8ff6bc6-6452-4080-bf05-d199c628db93" containerName="registry-server" Jan 21 10:04:00 crc kubenswrapper[5113]: I0121 10:04:00.149863 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8ff6bc6-6452-4080-bf05-d199c628db93" containerName="registry-server" Jan 21 10:04:00 crc kubenswrapper[5113]: I0121 10:04:00.150064 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c8ff6bc6-6452-4080-bf05-d199c628db93" containerName="extract-content" Jan 21 10:04:00 crc kubenswrapper[5113]: I0121 10:04:00.150243 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8ff6bc6-6452-4080-bf05-d199c628db93" containerName="extract-content" Jan 21 10:04:00 crc 
kubenswrapper[5113]: I0121 10:04:00.150385 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c8ff6bc6-6452-4080-bf05-d199c628db93" containerName="extract-utilities" Jan 21 10:04:00 crc kubenswrapper[5113]: I0121 10:04:00.150525 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8ff6bc6-6452-4080-bf05-d199c628db93" containerName="extract-utilities" Jan 21 10:04:00 crc kubenswrapper[5113]: I0121 10:04:00.150961 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="c8ff6bc6-6452-4080-bf05-d199c628db93" containerName="registry-server" Jan 21 10:04:00 crc kubenswrapper[5113]: I0121 10:04:00.166045 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483164-247tm"] Jan 21 10:04:00 crc kubenswrapper[5113]: I0121 10:04:00.166417 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483164-247tm" Jan 21 10:04:00 crc kubenswrapper[5113]: I0121 10:04:00.172674 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 10:04:00 crc kubenswrapper[5113]: I0121 10:04:00.172789 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 10:04:00 crc kubenswrapper[5113]: I0121 10:04:00.173610 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-mmrr5\"" Jan 21 10:04:00 crc kubenswrapper[5113]: I0121 10:04:00.248059 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k75hx\" (UniqueName: \"kubernetes.io/projected/83be5f80-4ab3-464d-b5a5-72cf5f4fdd27-kube-api-access-k75hx\") pod \"auto-csr-approver-29483164-247tm\" (UID: \"83be5f80-4ab3-464d-b5a5-72cf5f4fdd27\") " pod="openshift-infra/auto-csr-approver-29483164-247tm" Jan 21 10:04:00 crc 
kubenswrapper[5113]: I0121 10:04:00.350365 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k75hx\" (UniqueName: \"kubernetes.io/projected/83be5f80-4ab3-464d-b5a5-72cf5f4fdd27-kube-api-access-k75hx\") pod \"auto-csr-approver-29483164-247tm\" (UID: \"83be5f80-4ab3-464d-b5a5-72cf5f4fdd27\") " pod="openshift-infra/auto-csr-approver-29483164-247tm" Jan 21 10:04:00 crc kubenswrapper[5113]: I0121 10:04:00.381575 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k75hx\" (UniqueName: \"kubernetes.io/projected/83be5f80-4ab3-464d-b5a5-72cf5f4fdd27-kube-api-access-k75hx\") pod \"auto-csr-approver-29483164-247tm\" (UID: \"83be5f80-4ab3-464d-b5a5-72cf5f4fdd27\") " pod="openshift-infra/auto-csr-approver-29483164-247tm" Jan 21 10:04:00 crc kubenswrapper[5113]: I0121 10:04:00.497512 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483164-247tm" Jan 21 10:04:00 crc kubenswrapper[5113]: I0121 10:04:00.763670 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483164-247tm"] Jan 21 10:04:00 crc kubenswrapper[5113]: W0121 10:04:00.777955 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83be5f80_4ab3_464d_b5a5_72cf5f4fdd27.slice/crio-f119979f95b5105c015e39f7336bbf4969e7b29690c1551c415a31c6cf83ad4b WatchSource:0}: Error finding container f119979f95b5105c015e39f7336bbf4969e7b29690c1551c415a31c6cf83ad4b: Status 404 returned error can't find the container with id f119979f95b5105c015e39f7336bbf4969e7b29690c1551c415a31c6cf83ad4b Jan 21 10:04:00 crc kubenswrapper[5113]: I0121 10:04:00.862488 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483164-247tm" 
event={"ID":"83be5f80-4ab3-464d-b5a5-72cf5f4fdd27","Type":"ContainerStarted","Data":"f119979f95b5105c015e39f7336bbf4969e7b29690c1551c415a31c6cf83ad4b"} Jan 21 10:04:03 crc kubenswrapper[5113]: I0121 10:04:03.892040 5113 generic.go:358] "Generic (PLEG): container finished" podID="83be5f80-4ab3-464d-b5a5-72cf5f4fdd27" containerID="6e37a127987e94604837fadcda1ca1e1b43647b8c6afadb41c795c11e654ebcd" exitCode=0 Jan 21 10:04:03 crc kubenswrapper[5113]: I0121 10:04:03.892177 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483164-247tm" event={"ID":"83be5f80-4ab3-464d-b5a5-72cf5f4fdd27","Type":"ContainerDied","Data":"6e37a127987e94604837fadcda1ca1e1b43647b8c6afadb41c795c11e654ebcd"} Jan 21 10:04:05 crc kubenswrapper[5113]: I0121 10:04:05.249434 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483164-247tm" Jan 21 10:04:05 crc kubenswrapper[5113]: I0121 10:04:05.336414 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k75hx\" (UniqueName: \"kubernetes.io/projected/83be5f80-4ab3-464d-b5a5-72cf5f4fdd27-kube-api-access-k75hx\") pod \"83be5f80-4ab3-464d-b5a5-72cf5f4fdd27\" (UID: \"83be5f80-4ab3-464d-b5a5-72cf5f4fdd27\") " Jan 21 10:04:05 crc kubenswrapper[5113]: I0121 10:04:05.348443 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83be5f80-4ab3-464d-b5a5-72cf5f4fdd27-kube-api-access-k75hx" (OuterVolumeSpecName: "kube-api-access-k75hx") pod "83be5f80-4ab3-464d-b5a5-72cf5f4fdd27" (UID: "83be5f80-4ab3-464d-b5a5-72cf5f4fdd27"). InnerVolumeSpecName "kube-api-access-k75hx". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:04:05 crc kubenswrapper[5113]: I0121 10:04:05.438199 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k75hx\" (UniqueName: \"kubernetes.io/projected/83be5f80-4ab3-464d-b5a5-72cf5f4fdd27-kube-api-access-k75hx\") on node \"crc\" DevicePath \"\"" Jan 21 10:04:05 crc kubenswrapper[5113]: I0121 10:04:05.916079 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483164-247tm" Jan 21 10:04:05 crc kubenswrapper[5113]: I0121 10:04:05.916096 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483164-247tm" event={"ID":"83be5f80-4ab3-464d-b5a5-72cf5f4fdd27","Type":"ContainerDied","Data":"f119979f95b5105c015e39f7336bbf4969e7b29690c1551c415a31c6cf83ad4b"} Jan 21 10:04:05 crc kubenswrapper[5113]: I0121 10:04:05.916638 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f119979f95b5105c015e39f7336bbf4969e7b29690c1551c415a31c6cf83ad4b" Jan 21 10:04:06 crc kubenswrapper[5113]: I0121 10:04:06.320881 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483158-mfgcx"] Jan 21 10:04:06 crc kubenswrapper[5113]: I0121 10:04:06.327990 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483158-mfgcx"] Jan 21 10:04:06 crc kubenswrapper[5113]: I0121 10:04:06.868827 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0104258e-ab5b-4f01-bd81-773277032f6a" path="/var/lib/kubelet/pods/0104258e-ab5b-4f01-bd81-773277032f6a/volumes" Jan 21 10:05:02 crc kubenswrapper[5113]: I0121 10:05:02.619215 5113 scope.go:117] "RemoveContainer" containerID="af697446c91f69917c6620eb7608200ae1e1a13650e6e32eef2f9740457da2a2" Jan 21 10:05:28 crc kubenswrapper[5113]: I0121 10:05:28.339780 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:05:28 crc kubenswrapper[5113]: I0121 10:05:28.341889 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:05:58 crc kubenswrapper[5113]: I0121 10:05:58.339994 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:05:58 crc kubenswrapper[5113]: I0121 10:05:58.340809 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:06:00 crc kubenswrapper[5113]: I0121 10:06:00.157785 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483166-fgkrk"] Jan 21 10:06:00 crc kubenswrapper[5113]: I0121 10:06:00.159370 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="83be5f80-4ab3-464d-b5a5-72cf5f4fdd27" containerName="oc" Jan 21 10:06:00 crc kubenswrapper[5113]: I0121 10:06:00.159439 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="83be5f80-4ab3-464d-b5a5-72cf5f4fdd27" containerName="oc" Jan 21 10:06:00 crc kubenswrapper[5113]: I0121 10:06:00.159833 5113 
memory_manager.go:356] "RemoveStaleState removing state" podUID="83be5f80-4ab3-464d-b5a5-72cf5f4fdd27" containerName="oc" Jan 21 10:06:00 crc kubenswrapper[5113]: I0121 10:06:00.177951 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483166-fgkrk"] Jan 21 10:06:00 crc kubenswrapper[5113]: I0121 10:06:00.178086 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483166-fgkrk" Jan 21 10:06:00 crc kubenswrapper[5113]: I0121 10:06:00.183401 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 10:06:00 crc kubenswrapper[5113]: I0121 10:06:00.185092 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-mmrr5\"" Jan 21 10:06:00 crc kubenswrapper[5113]: I0121 10:06:00.185395 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 10:06:00 crc kubenswrapper[5113]: I0121 10:06:00.233286 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkgz6\" (UniqueName: \"kubernetes.io/projected/069d79c3-4497-4996-8728-5e7dacd3e248-kube-api-access-wkgz6\") pod \"auto-csr-approver-29483166-fgkrk\" (UID: \"069d79c3-4497-4996-8728-5e7dacd3e248\") " pod="openshift-infra/auto-csr-approver-29483166-fgkrk" Jan 21 10:06:00 crc kubenswrapper[5113]: I0121 10:06:00.335605 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wkgz6\" (UniqueName: \"kubernetes.io/projected/069d79c3-4497-4996-8728-5e7dacd3e248-kube-api-access-wkgz6\") pod \"auto-csr-approver-29483166-fgkrk\" (UID: \"069d79c3-4497-4996-8728-5e7dacd3e248\") " pod="openshift-infra/auto-csr-approver-29483166-fgkrk" Jan 21 10:06:00 crc kubenswrapper[5113]: I0121 10:06:00.367284 5113 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkgz6\" (UniqueName: \"kubernetes.io/projected/069d79c3-4497-4996-8728-5e7dacd3e248-kube-api-access-wkgz6\") pod \"auto-csr-approver-29483166-fgkrk\" (UID: \"069d79c3-4497-4996-8728-5e7dacd3e248\") " pod="openshift-infra/auto-csr-approver-29483166-fgkrk" Jan 21 10:06:00 crc kubenswrapper[5113]: I0121 10:06:00.514830 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483166-fgkrk" Jan 21 10:06:00 crc kubenswrapper[5113]: I0121 10:06:00.796862 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483166-fgkrk"] Jan 21 10:06:01 crc kubenswrapper[5113]: I0121 10:06:01.051195 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483166-fgkrk" event={"ID":"069d79c3-4497-4996-8728-5e7dacd3e248","Type":"ContainerStarted","Data":"09ac6be511168b29f5181db9893dec4c86f1f3ca8998db8a460d9b9ebdf3458d"} Jan 21 10:06:02 crc kubenswrapper[5113]: I0121 10:06:02.063096 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483166-fgkrk" event={"ID":"069d79c3-4497-4996-8728-5e7dacd3e248","Type":"ContainerStarted","Data":"9141c942ecfe33c22a29e5f34d8877dbb70fcb89cdfc4f1eba77ea8108d56fdd"} Jan 21 10:06:02 crc kubenswrapper[5113]: I0121 10:06:02.091335 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29483166-fgkrk" podStartSLOduration=1.220705603 podStartE2EDuration="2.091274312s" podCreationTimestamp="2026-01-21 10:06:00 +0000 UTC" firstStartedPulling="2026-01-21 10:06:00.811466252 +0000 UTC m=+2890.312293341" lastFinishedPulling="2026-01-21 10:06:01.682034961 +0000 UTC m=+2891.182862050" observedRunningTime="2026-01-21 10:06:02.08238659 +0000 UTC m=+2891.583213679" watchObservedRunningTime="2026-01-21 10:06:02.091274312 +0000 UTC m=+2891.592101401" Jan 
21 10:06:03 crc kubenswrapper[5113]: I0121 10:06:03.073375 5113 generic.go:358] "Generic (PLEG): container finished" podID="069d79c3-4497-4996-8728-5e7dacd3e248" containerID="9141c942ecfe33c22a29e5f34d8877dbb70fcb89cdfc4f1eba77ea8108d56fdd" exitCode=0 Jan 21 10:06:03 crc kubenswrapper[5113]: I0121 10:06:03.073434 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483166-fgkrk" event={"ID":"069d79c3-4497-4996-8728-5e7dacd3e248","Type":"ContainerDied","Data":"9141c942ecfe33c22a29e5f34d8877dbb70fcb89cdfc4f1eba77ea8108d56fdd"} Jan 21 10:06:04 crc kubenswrapper[5113]: I0121 10:06:04.414035 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483166-fgkrk" Jan 21 10:06:04 crc kubenswrapper[5113]: I0121 10:06:04.505011 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkgz6\" (UniqueName: \"kubernetes.io/projected/069d79c3-4497-4996-8728-5e7dacd3e248-kube-api-access-wkgz6\") pod \"069d79c3-4497-4996-8728-5e7dacd3e248\" (UID: \"069d79c3-4497-4996-8728-5e7dacd3e248\") " Jan 21 10:06:04 crc kubenswrapper[5113]: I0121 10:06:04.520798 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/069d79c3-4497-4996-8728-5e7dacd3e248-kube-api-access-wkgz6" (OuterVolumeSpecName: "kube-api-access-wkgz6") pod "069d79c3-4497-4996-8728-5e7dacd3e248" (UID: "069d79c3-4497-4996-8728-5e7dacd3e248"). InnerVolumeSpecName "kube-api-access-wkgz6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:06:04 crc kubenswrapper[5113]: I0121 10:06:04.607995 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wkgz6\" (UniqueName: \"kubernetes.io/projected/069d79c3-4497-4996-8728-5e7dacd3e248-kube-api-access-wkgz6\") on node \"crc\" DevicePath \"\"" Jan 21 10:06:05 crc kubenswrapper[5113]: I0121 10:06:05.094536 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483166-fgkrk" event={"ID":"069d79c3-4497-4996-8728-5e7dacd3e248","Type":"ContainerDied","Data":"09ac6be511168b29f5181db9893dec4c86f1f3ca8998db8a460d9b9ebdf3458d"} Jan 21 10:06:05 crc kubenswrapper[5113]: I0121 10:06:05.094887 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09ac6be511168b29f5181db9893dec4c86f1f3ca8998db8a460d9b9ebdf3458d" Jan 21 10:06:05 crc kubenswrapper[5113]: I0121 10:06:05.094548 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483166-fgkrk" Jan 21 10:06:05 crc kubenswrapper[5113]: I0121 10:06:05.165196 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483160-ls6tr"] Jan 21 10:06:05 crc kubenswrapper[5113]: I0121 10:06:05.174665 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483160-ls6tr"] Jan 21 10:06:06 crc kubenswrapper[5113]: I0121 10:06:06.858106 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b42489fb-b8cb-4ccd-8aad-d5f2c09b02c3" path="/var/lib/kubelet/pods/b42489fb-b8cb-4ccd-8aad-d5f2c09b02c3/volumes" Jan 21 10:06:28 crc kubenswrapper[5113]: I0121 10:06:28.339471 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Jan 21 10:06:28 crc kubenswrapper[5113]: I0121 10:06:28.340221 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:06:28 crc kubenswrapper[5113]: I0121 10:06:28.340289 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" Jan 21 10:06:28 crc kubenswrapper[5113]: I0121 10:06:28.341583 5113 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a632198cd672a90b72f5e279e21531758de9f10569214cbdb0b46bdbfae29b1f"} pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 10:06:28 crc kubenswrapper[5113]: I0121 10:06:28.341698 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" containerID="cri-o://a632198cd672a90b72f5e279e21531758de9f10569214cbdb0b46bdbfae29b1f" gracePeriod=600 Jan 21 10:06:28 crc kubenswrapper[5113]: E0121 10:06:28.531714 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:06:29 crc kubenswrapper[5113]: 
I0121 10:06:29.378552 5113 generic.go:358] "Generic (PLEG): container finished" podID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerID="a632198cd672a90b72f5e279e21531758de9f10569214cbdb0b46bdbfae29b1f" exitCode=0 Jan 21 10:06:29 crc kubenswrapper[5113]: I0121 10:06:29.378709 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" event={"ID":"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd","Type":"ContainerDied","Data":"a632198cd672a90b72f5e279e21531758de9f10569214cbdb0b46bdbfae29b1f"} Jan 21 10:06:29 crc kubenswrapper[5113]: I0121 10:06:29.378808 5113 scope.go:117] "RemoveContainer" containerID="b98820d48460bd73ec9a158e9d5d327f25887cad19e28743f6fc869fcf62fe1d" Jan 21 10:06:29 crc kubenswrapper[5113]: I0121 10:06:29.379424 5113 scope.go:117] "RemoveContainer" containerID="a632198cd672a90b72f5e279e21531758de9f10569214cbdb0b46bdbfae29b1f" Jan 21 10:06:29 crc kubenswrapper[5113]: E0121 10:06:29.379865 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:06:40 crc kubenswrapper[5113]: I0121 10:06:40.855267 5113 scope.go:117] "RemoveContainer" containerID="a632198cd672a90b72f5e279e21531758de9f10569214cbdb0b46bdbfae29b1f" Jan 21 10:06:40 crc kubenswrapper[5113]: E0121 10:06:40.856361 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:06:53 crc kubenswrapper[5113]: I0121 10:06:53.844405 5113 scope.go:117] "RemoveContainer" containerID="a632198cd672a90b72f5e279e21531758de9f10569214cbdb0b46bdbfae29b1f" Jan 21 10:06:53 crc kubenswrapper[5113]: E0121 10:06:53.845595 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:07:02 crc kubenswrapper[5113]: I0121 10:07:02.766757 5113 scope.go:117] "RemoveContainer" containerID="044d7f3fc7ccab20edaa366a25fa85f9c8adfc4c414352c2aa854b2c848a3f7d" Jan 21 10:07:06 crc kubenswrapper[5113]: I0121 10:07:06.849849 5113 scope.go:117] "RemoveContainer" containerID="a632198cd672a90b72f5e279e21531758de9f10569214cbdb0b46bdbfae29b1f" Jan 21 10:07:06 crc kubenswrapper[5113]: E0121 10:07:06.855220 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:07:17 crc kubenswrapper[5113]: I0121 10:07:17.844252 5113 scope.go:117] "RemoveContainer" containerID="a632198cd672a90b72f5e279e21531758de9f10569214cbdb0b46bdbfae29b1f" Jan 21 10:07:17 crc kubenswrapper[5113]: E0121 10:07:17.845779 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:07:32 crc kubenswrapper[5113]: I0121 10:07:32.844225 5113 scope.go:117] "RemoveContainer" containerID="a632198cd672a90b72f5e279e21531758de9f10569214cbdb0b46bdbfae29b1f" Jan 21 10:07:32 crc kubenswrapper[5113]: E0121 10:07:32.844964 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:07:45 crc kubenswrapper[5113]: I0121 10:07:45.844036 5113 scope.go:117] "RemoveContainer" containerID="a632198cd672a90b72f5e279e21531758de9f10569214cbdb0b46bdbfae29b1f" Jan 21 10:07:45 crc kubenswrapper[5113]: E0121 10:07:45.845224 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:07:52 crc kubenswrapper[5113]: I0121 10:07:52.236566 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vcw7s_11da35cd-b282-4537-ac8f-b6c86b18c21f/kube-multus/0.log" Jan 21 10:07:52 crc kubenswrapper[5113]: I0121 10:07:52.253495 5113 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 10:07:52 crc kubenswrapper[5113]: I0121 10:07:52.254053 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vcw7s_11da35cd-b282-4537-ac8f-b6c86b18c21f/kube-multus/0.log" Jan 21 10:07:52 crc kubenswrapper[5113]: I0121 10:07:52.263905 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 10:08:00 crc kubenswrapper[5113]: I0121 10:08:00.146257 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483168-qkqrd"] Jan 21 10:08:00 crc kubenswrapper[5113]: I0121 10:08:00.147572 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="069d79c3-4497-4996-8728-5e7dacd3e248" containerName="oc" Jan 21 10:08:00 crc kubenswrapper[5113]: I0121 10:08:00.147586 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="069d79c3-4497-4996-8728-5e7dacd3e248" containerName="oc" Jan 21 10:08:00 crc kubenswrapper[5113]: I0121 10:08:00.147689 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="069d79c3-4497-4996-8728-5e7dacd3e248" containerName="oc" Jan 21 10:08:00 crc kubenswrapper[5113]: I0121 10:08:00.153760 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483168-qkqrd" Jan 21 10:08:00 crc kubenswrapper[5113]: I0121 10:08:00.159083 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-mmrr5\"" Jan 21 10:08:00 crc kubenswrapper[5113]: I0121 10:08:00.159643 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 10:08:00 crc kubenswrapper[5113]: I0121 10:08:00.159906 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 10:08:00 crc kubenswrapper[5113]: I0121 10:08:00.171064 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483168-qkqrd"] Jan 21 10:08:00 crc kubenswrapper[5113]: I0121 10:08:00.184401 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77rk9\" (UniqueName: \"kubernetes.io/projected/d3d9a3d5-9fbe-4662-9536-9d5a2ce34a85-kube-api-access-77rk9\") pod \"auto-csr-approver-29483168-qkqrd\" (UID: \"d3d9a3d5-9fbe-4662-9536-9d5a2ce34a85\") " pod="openshift-infra/auto-csr-approver-29483168-qkqrd" Jan 21 10:08:00 crc kubenswrapper[5113]: I0121 10:08:00.286603 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-77rk9\" (UniqueName: \"kubernetes.io/projected/d3d9a3d5-9fbe-4662-9536-9d5a2ce34a85-kube-api-access-77rk9\") pod \"auto-csr-approver-29483168-qkqrd\" (UID: \"d3d9a3d5-9fbe-4662-9536-9d5a2ce34a85\") " pod="openshift-infra/auto-csr-approver-29483168-qkqrd" Jan 21 10:08:00 crc kubenswrapper[5113]: I0121 10:08:00.329678 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-77rk9\" (UniqueName: \"kubernetes.io/projected/d3d9a3d5-9fbe-4662-9536-9d5a2ce34a85-kube-api-access-77rk9\") pod \"auto-csr-approver-29483168-qkqrd\" (UID: 
\"d3d9a3d5-9fbe-4662-9536-9d5a2ce34a85\") " pod="openshift-infra/auto-csr-approver-29483168-qkqrd" Jan 21 10:08:00 crc kubenswrapper[5113]: I0121 10:08:00.528463 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483168-qkqrd" Jan 21 10:08:00 crc kubenswrapper[5113]: I0121 10:08:00.856426 5113 scope.go:117] "RemoveContainer" containerID="a632198cd672a90b72f5e279e21531758de9f10569214cbdb0b46bdbfae29b1f" Jan 21 10:08:00 crc kubenswrapper[5113]: E0121 10:08:00.857175 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:08:01 crc kubenswrapper[5113]: I0121 10:08:01.037363 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483168-qkqrd"] Jan 21 10:08:01 crc kubenswrapper[5113]: I0121 10:08:01.042800 5113 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 10:08:01 crc kubenswrapper[5113]: I0121 10:08:01.309709 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483168-qkqrd" event={"ID":"d3d9a3d5-9fbe-4662-9536-9d5a2ce34a85","Type":"ContainerStarted","Data":"6cbbc9ba8f31caf3e47618e370587dcd0adacf8b33f37a786700a9e47398bae3"} Jan 21 10:08:02 crc kubenswrapper[5113]: I0121 10:08:02.325204 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483168-qkqrd" event={"ID":"d3d9a3d5-9fbe-4662-9536-9d5a2ce34a85","Type":"ContainerStarted","Data":"5c4dc5b1e0cf7bc0f14a711c11eedf750b84c90e3e59b561d7e45c8c410bcc75"} Jan 21 10:08:02 crc kubenswrapper[5113]: I0121 
10:08:02.353182 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29483168-qkqrd" podStartSLOduration=1.497025804 podStartE2EDuration="2.353150727s" podCreationTimestamp="2026-01-21 10:08:00 +0000 UTC" firstStartedPulling="2026-01-21 10:08:01.043208568 +0000 UTC m=+3010.544035647" lastFinishedPulling="2026-01-21 10:08:01.899333491 +0000 UTC m=+3011.400160570" observedRunningTime="2026-01-21 10:08:02.346072346 +0000 UTC m=+3011.846899435" watchObservedRunningTime="2026-01-21 10:08:02.353150727 +0000 UTC m=+3011.853977816" Jan 21 10:08:03 crc kubenswrapper[5113]: I0121 10:08:03.337242 5113 generic.go:358] "Generic (PLEG): container finished" podID="d3d9a3d5-9fbe-4662-9536-9d5a2ce34a85" containerID="5c4dc5b1e0cf7bc0f14a711c11eedf750b84c90e3e59b561d7e45c8c410bcc75" exitCode=0 Jan 21 10:08:03 crc kubenswrapper[5113]: I0121 10:08:03.337513 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483168-qkqrd" event={"ID":"d3d9a3d5-9fbe-4662-9536-9d5a2ce34a85","Type":"ContainerDied","Data":"5c4dc5b1e0cf7bc0f14a711c11eedf750b84c90e3e59b561d7e45c8c410bcc75"} Jan 21 10:08:04 crc kubenswrapper[5113]: I0121 10:08:04.691781 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483168-qkqrd" Jan 21 10:08:04 crc kubenswrapper[5113]: I0121 10:08:04.768460 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77rk9\" (UniqueName: \"kubernetes.io/projected/d3d9a3d5-9fbe-4662-9536-9d5a2ce34a85-kube-api-access-77rk9\") pod \"d3d9a3d5-9fbe-4662-9536-9d5a2ce34a85\" (UID: \"d3d9a3d5-9fbe-4662-9536-9d5a2ce34a85\") " Jan 21 10:08:04 crc kubenswrapper[5113]: I0121 10:08:04.778244 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3d9a3d5-9fbe-4662-9536-9d5a2ce34a85-kube-api-access-77rk9" (OuterVolumeSpecName: "kube-api-access-77rk9") pod "d3d9a3d5-9fbe-4662-9536-9d5a2ce34a85" (UID: "d3d9a3d5-9fbe-4662-9536-9d5a2ce34a85"). InnerVolumeSpecName "kube-api-access-77rk9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:08:04 crc kubenswrapper[5113]: I0121 10:08:04.870877 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-77rk9\" (UniqueName: \"kubernetes.io/projected/d3d9a3d5-9fbe-4662-9536-9d5a2ce34a85-kube-api-access-77rk9\") on node \"crc\" DevicePath \"\"" Jan 21 10:08:05 crc kubenswrapper[5113]: I0121 10:08:05.359704 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483168-qkqrd" Jan 21 10:08:05 crc kubenswrapper[5113]: I0121 10:08:05.359680 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483168-qkqrd" event={"ID":"d3d9a3d5-9fbe-4662-9536-9d5a2ce34a85","Type":"ContainerDied","Data":"6cbbc9ba8f31caf3e47618e370587dcd0adacf8b33f37a786700a9e47398bae3"} Jan 21 10:08:05 crc kubenswrapper[5113]: I0121 10:08:05.360668 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cbbc9ba8f31caf3e47618e370587dcd0adacf8b33f37a786700a9e47398bae3" Jan 21 10:08:05 crc kubenswrapper[5113]: I0121 10:08:05.425445 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483162-lhnsv"] Jan 21 10:08:05 crc kubenswrapper[5113]: I0121 10:08:05.434390 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483162-lhnsv"] Jan 21 10:08:06 crc kubenswrapper[5113]: I0121 10:08:06.874551 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4a90851-c23b-406a-8e35-4a894ef1e09d" path="/var/lib/kubelet/pods/e4a90851-c23b-406a-8e35-4a894ef1e09d/volumes" Jan 21 10:08:13 crc kubenswrapper[5113]: I0121 10:08:13.843553 5113 scope.go:117] "RemoveContainer" containerID="a632198cd672a90b72f5e279e21531758de9f10569214cbdb0b46bdbfae29b1f" Jan 21 10:08:13 crc kubenswrapper[5113]: E0121 10:08:13.846492 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:08:28 crc kubenswrapper[5113]: I0121 10:08:28.844763 5113 scope.go:117] "RemoveContainer" 
containerID="a632198cd672a90b72f5e279e21531758de9f10569214cbdb0b46bdbfae29b1f" Jan 21 10:08:28 crc kubenswrapper[5113]: E0121 10:08:28.846053 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:08:41 crc kubenswrapper[5113]: I0121 10:08:41.843296 5113 scope.go:117] "RemoveContainer" containerID="a632198cd672a90b72f5e279e21531758de9f10569214cbdb0b46bdbfae29b1f" Jan 21 10:08:41 crc kubenswrapper[5113]: E0121 10:08:41.848672 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:08:56 crc kubenswrapper[5113]: I0121 10:08:56.846044 5113 scope.go:117] "RemoveContainer" containerID="a632198cd672a90b72f5e279e21531758de9f10569214cbdb0b46bdbfae29b1f" Jan 21 10:08:56 crc kubenswrapper[5113]: E0121 10:08:56.847370 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:09:02 crc kubenswrapper[5113]: I0121 10:09:02.954167 5113 scope.go:117] 
"RemoveContainer" containerID="99462184a6aa43dfdb16a531a896b8800e8f9d154616d8e582c1fc120a2610dd" Jan 21 10:09:07 crc kubenswrapper[5113]: I0121 10:09:07.845165 5113 scope.go:117] "RemoveContainer" containerID="a632198cd672a90b72f5e279e21531758de9f10569214cbdb0b46bdbfae29b1f" Jan 21 10:09:07 crc kubenswrapper[5113]: E0121 10:09:07.846297 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:09:19 crc kubenswrapper[5113]: I0121 10:09:19.844011 5113 scope.go:117] "RemoveContainer" containerID="a632198cd672a90b72f5e279e21531758de9f10569214cbdb0b46bdbfae29b1f" Jan 21 10:09:19 crc kubenswrapper[5113]: E0121 10:09:19.845812 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:09:30 crc kubenswrapper[5113]: I0121 10:09:30.864693 5113 scope.go:117] "RemoveContainer" containerID="a632198cd672a90b72f5e279e21531758de9f10569214cbdb0b46bdbfae29b1f" Jan 21 10:09:30 crc kubenswrapper[5113]: E0121 10:09:30.866185 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:09:41 crc kubenswrapper[5113]: I0121 10:09:41.844335 5113 scope.go:117] "RemoveContainer" containerID="a632198cd672a90b72f5e279e21531758de9f10569214cbdb0b46bdbfae29b1f" Jan 21 10:09:41 crc kubenswrapper[5113]: E0121 10:09:41.845471 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:09:54 crc kubenswrapper[5113]: I0121 10:09:54.843705 5113 scope.go:117] "RemoveContainer" containerID="a632198cd672a90b72f5e279e21531758de9f10569214cbdb0b46bdbfae29b1f" Jan 21 10:09:54 crc kubenswrapper[5113]: E0121 10:09:54.844654 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:10:00 crc kubenswrapper[5113]: I0121 10:10:00.150480 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483170-tlfsp"] Jan 21 10:10:00 crc kubenswrapper[5113]: I0121 10:10:00.154177 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d3d9a3d5-9fbe-4662-9536-9d5a2ce34a85" containerName="oc" Jan 21 10:10:00 crc kubenswrapper[5113]: I0121 10:10:00.154217 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3d9a3d5-9fbe-4662-9536-9d5a2ce34a85" 
containerName="oc"
Jan 21 10:10:00 crc kubenswrapper[5113]: I0121 10:10:00.155077 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="d3d9a3d5-9fbe-4662-9536-9d5a2ce34a85" containerName="oc"
Jan 21 10:10:00 crc kubenswrapper[5113]: I0121 10:10:00.167551 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483170-tlfsp"]
Jan 21 10:10:00 crc kubenswrapper[5113]: I0121 10:10:00.167765 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483170-tlfsp"
Jan 21 10:10:00 crc kubenswrapper[5113]: I0121 10:10:00.170858 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-mmrr5\""
Jan 21 10:10:00 crc kubenswrapper[5113]: I0121 10:10:00.170934 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 21 10:10:00 crc kubenswrapper[5113]: I0121 10:10:00.170953 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 21 10:10:00 crc kubenswrapper[5113]: I0121 10:10:00.215482 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4qtw\" (UniqueName: \"kubernetes.io/projected/b6c8e241-2f98-483f-8fce-5de20dbd7b77-kube-api-access-w4qtw\") pod \"auto-csr-approver-29483170-tlfsp\" (UID: \"b6c8e241-2f98-483f-8fce-5de20dbd7b77\") " pod="openshift-infra/auto-csr-approver-29483170-tlfsp"
Jan 21 10:10:00 crc kubenswrapper[5113]: I0121 10:10:00.317085 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w4qtw\" (UniqueName: \"kubernetes.io/projected/b6c8e241-2f98-483f-8fce-5de20dbd7b77-kube-api-access-w4qtw\") pod \"auto-csr-approver-29483170-tlfsp\" (UID: \"b6c8e241-2f98-483f-8fce-5de20dbd7b77\") " pod="openshift-infra/auto-csr-approver-29483170-tlfsp"
Jan 21 10:10:00 crc kubenswrapper[5113]: I0121 10:10:00.341605 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4qtw\" (UniqueName: \"kubernetes.io/projected/b6c8e241-2f98-483f-8fce-5de20dbd7b77-kube-api-access-w4qtw\") pod \"auto-csr-approver-29483170-tlfsp\" (UID: \"b6c8e241-2f98-483f-8fce-5de20dbd7b77\") " pod="openshift-infra/auto-csr-approver-29483170-tlfsp"
Jan 21 10:10:00 crc kubenswrapper[5113]: I0121 10:10:00.495993 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483170-tlfsp"
Jan 21 10:10:00 crc kubenswrapper[5113]: I0121 10:10:00.832907 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483170-tlfsp"]
Jan 21 10:10:01 crc kubenswrapper[5113]: I0121 10:10:01.518596 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483170-tlfsp" event={"ID":"b6c8e241-2f98-483f-8fce-5de20dbd7b77","Type":"ContainerStarted","Data":"55312844d7956c286a893e40c0d3428ab9e35238e0ebb53e189cf709bdcfbc24"}
Jan 21 10:10:02 crc kubenswrapper[5113]: I0121 10:10:02.539388 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483170-tlfsp" event={"ID":"b6c8e241-2f98-483f-8fce-5de20dbd7b77","Type":"ContainerStarted","Data":"cb6fe12127443d0d026e26fad8f10470181573c8c25e3b7bac9da21b7069099e"}
Jan 21 10:10:02 crc kubenswrapper[5113]: I0121 10:10:02.560715 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29483170-tlfsp" podStartSLOduration=1.467433676 podStartE2EDuration="2.56068683s" podCreationTimestamp="2026-01-21 10:10:00 +0000 UTC" firstStartedPulling="2026-01-21 10:10:00.840297914 +0000 UTC m=+3130.341124973" lastFinishedPulling="2026-01-21 10:10:01.933551048 +0000 UTC m=+3131.434378127" observedRunningTime="2026-01-21 10:10:02.56034715 +0000 UTC m=+3132.061174249" watchObservedRunningTime="2026-01-21 10:10:02.56068683 +0000 UTC m=+3132.061513919"
Jan 21 10:10:03 crc kubenswrapper[5113]: I0121 10:10:03.553972 5113 generic.go:358] "Generic (PLEG): container finished" podID="b6c8e241-2f98-483f-8fce-5de20dbd7b77" containerID="cb6fe12127443d0d026e26fad8f10470181573c8c25e3b7bac9da21b7069099e" exitCode=0
Jan 21 10:10:03 crc kubenswrapper[5113]: I0121 10:10:03.554171 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483170-tlfsp" event={"ID":"b6c8e241-2f98-483f-8fce-5de20dbd7b77","Type":"ContainerDied","Data":"cb6fe12127443d0d026e26fad8f10470181573c8c25e3b7bac9da21b7069099e"}
Jan 21 10:10:04 crc kubenswrapper[5113]: I0121 10:10:04.917497 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483170-tlfsp"
Jan 21 10:10:05 crc kubenswrapper[5113]: I0121 10:10:05.114218 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4qtw\" (UniqueName: \"kubernetes.io/projected/b6c8e241-2f98-483f-8fce-5de20dbd7b77-kube-api-access-w4qtw\") pod \"b6c8e241-2f98-483f-8fce-5de20dbd7b77\" (UID: \"b6c8e241-2f98-483f-8fce-5de20dbd7b77\") "
Jan 21 10:10:05 crc kubenswrapper[5113]: I0121 10:10:05.125311 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6c8e241-2f98-483f-8fce-5de20dbd7b77-kube-api-access-w4qtw" (OuterVolumeSpecName: "kube-api-access-w4qtw") pod "b6c8e241-2f98-483f-8fce-5de20dbd7b77" (UID: "b6c8e241-2f98-483f-8fce-5de20dbd7b77"). InnerVolumeSpecName "kube-api-access-w4qtw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 10:10:05 crc kubenswrapper[5113]: I0121 10:10:05.216078 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w4qtw\" (UniqueName: \"kubernetes.io/projected/b6c8e241-2f98-483f-8fce-5de20dbd7b77-kube-api-access-w4qtw\") on node \"crc\" DevicePath \"\""
Jan 21 10:10:05 crc kubenswrapper[5113]: I0121 10:10:05.576512 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483170-tlfsp" event={"ID":"b6c8e241-2f98-483f-8fce-5de20dbd7b77","Type":"ContainerDied","Data":"55312844d7956c286a893e40c0d3428ab9e35238e0ebb53e189cf709bdcfbc24"}
Jan 21 10:10:05 crc kubenswrapper[5113]: I0121 10:10:05.576911 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55312844d7956c286a893e40c0d3428ab9e35238e0ebb53e189cf709bdcfbc24"
Jan 21 10:10:05 crc kubenswrapper[5113]: I0121 10:10:05.577023 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483170-tlfsp"
Jan 21 10:10:05 crc kubenswrapper[5113]: I0121 10:10:05.653443 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483164-247tm"]
Jan 21 10:10:05 crc kubenswrapper[5113]: I0121 10:10:05.660279 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483164-247tm"]
Jan 21 10:10:05 crc kubenswrapper[5113]: I0121 10:10:05.843197 5113 scope.go:117] "RemoveContainer" containerID="a632198cd672a90b72f5e279e21531758de9f10569214cbdb0b46bdbfae29b1f"
Jan 21 10:10:05 crc kubenswrapper[5113]: E0121 10:10:05.843856 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd"
Jan 21 10:10:06 crc kubenswrapper[5113]: I0121 10:10:06.868363 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83be5f80-4ab3-464d-b5a5-72cf5f4fdd27" path="/var/lib/kubelet/pods/83be5f80-4ab3-464d-b5a5-72cf5f4fdd27/volumes"
Jan 21 10:10:16 crc kubenswrapper[5113]: I0121 10:10:16.844649 5113 scope.go:117] "RemoveContainer" containerID="a632198cd672a90b72f5e279e21531758de9f10569214cbdb0b46bdbfae29b1f"
Jan 21 10:10:16 crc kubenswrapper[5113]: E0121 10:10:16.846306 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd"
Jan 21 10:10:31 crc kubenswrapper[5113]: I0121 10:10:31.844458 5113 scope.go:117] "RemoveContainer" containerID="a632198cd672a90b72f5e279e21531758de9f10569214cbdb0b46bdbfae29b1f"
Jan 21 10:10:31 crc kubenswrapper[5113]: E0121 10:10:31.845675 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd"
Jan 21 10:10:44 crc kubenswrapper[5113]: I0121 10:10:44.844218 5113 scope.go:117] "RemoveContainer" containerID="a632198cd672a90b72f5e279e21531758de9f10569214cbdb0b46bdbfae29b1f"
Jan 21 10:10:44 crc kubenswrapper[5113]: E0121 10:10:44.845505 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd"
Jan 21 10:10:56 crc kubenswrapper[5113]: I0121 10:10:56.845803 5113 scope.go:117] "RemoveContainer" containerID="a632198cd672a90b72f5e279e21531758de9f10569214cbdb0b46bdbfae29b1f"
Jan 21 10:10:56 crc kubenswrapper[5113]: E0121 10:10:56.846957 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd"
Jan 21 10:11:03 crc kubenswrapper[5113]: I0121 10:11:03.138630 5113 scope.go:117] "RemoveContainer" containerID="6e37a127987e94604837fadcda1ca1e1b43647b8c6afadb41c795c11e654ebcd"
Jan 21 10:11:08 crc kubenswrapper[5113]: I0121 10:11:08.844131 5113 scope.go:117] "RemoveContainer" containerID="a632198cd672a90b72f5e279e21531758de9f10569214cbdb0b46bdbfae29b1f"
Jan 21 10:11:08 crc kubenswrapper[5113]: E0121 10:11:08.846593 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd"
Jan 21 10:11:20 crc kubenswrapper[5113]: I0121 10:11:20.844523 5113 scope.go:117] "RemoveContainer" containerID="a632198cd672a90b72f5e279e21531758de9f10569214cbdb0b46bdbfae29b1f"
Jan 21 10:11:20 crc kubenswrapper[5113]: E0121 10:11:20.845387 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd"
Jan 21 10:11:35 crc kubenswrapper[5113]: I0121 10:11:35.844815 5113 scope.go:117] "RemoveContainer" containerID="a632198cd672a90b72f5e279e21531758de9f10569214cbdb0b46bdbfae29b1f"
Jan 21 10:11:36 crc kubenswrapper[5113]: I0121 10:11:36.483073 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" event={"ID":"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd","Type":"ContainerStarted","Data":"54f41eed3d358a3564691b0504015bd4ae43e2f72e6737cc1ce3b758639d857d"}
Jan 21 10:12:00 crc kubenswrapper[5113]: I0121 10:12:00.173629 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483172-rt5rk"]
Jan 21 10:12:00 crc kubenswrapper[5113]: I0121 10:12:00.176343 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b6c8e241-2f98-483f-8fce-5de20dbd7b77" containerName="oc"
Jan 21 10:12:00 crc kubenswrapper[5113]: I0121 10:12:00.176417 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6c8e241-2f98-483f-8fce-5de20dbd7b77" containerName="oc"
Jan 21 10:12:00 crc kubenswrapper[5113]: I0121 10:12:00.176706 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="b6c8e241-2f98-483f-8fce-5de20dbd7b77" containerName="oc"
Jan 21 10:12:00 crc kubenswrapper[5113]: I0121 10:12:00.192373 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483172-rt5rk"]
Jan 21 10:12:00 crc kubenswrapper[5113]: I0121 10:12:00.192515 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483172-rt5rk"
Jan 21 10:12:00 crc kubenswrapper[5113]: I0121 10:12:00.203555 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-mmrr5\""
Jan 21 10:12:00 crc kubenswrapper[5113]: I0121 10:12:00.203660 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 21 10:12:00 crc kubenswrapper[5113]: I0121 10:12:00.204002 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 21 10:12:00 crc kubenswrapper[5113]: I0121 10:12:00.242509 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rvmb\" (UniqueName: \"kubernetes.io/projected/40989b1d-8b29-4203-8b56-2f9d9043e609-kube-api-access-9rvmb\") pod \"auto-csr-approver-29483172-rt5rk\" (UID: \"40989b1d-8b29-4203-8b56-2f9d9043e609\") " pod="openshift-infra/auto-csr-approver-29483172-rt5rk"
Jan 21 10:12:00 crc kubenswrapper[5113]: I0121 10:12:00.344021 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9rvmb\" (UniqueName: \"kubernetes.io/projected/40989b1d-8b29-4203-8b56-2f9d9043e609-kube-api-access-9rvmb\") pod \"auto-csr-approver-29483172-rt5rk\" (UID: \"40989b1d-8b29-4203-8b56-2f9d9043e609\") " pod="openshift-infra/auto-csr-approver-29483172-rt5rk"
Jan 21 10:12:00 crc kubenswrapper[5113]: I0121 10:12:00.378891 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rvmb\" (UniqueName: \"kubernetes.io/projected/40989b1d-8b29-4203-8b56-2f9d9043e609-kube-api-access-9rvmb\") pod \"auto-csr-approver-29483172-rt5rk\" (UID: \"40989b1d-8b29-4203-8b56-2f9d9043e609\") " pod="openshift-infra/auto-csr-approver-29483172-rt5rk"
Jan 21 10:12:00 crc kubenswrapper[5113]: I0121 10:12:00.529924 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483172-rt5rk"
Jan 21 10:12:01 crc kubenswrapper[5113]: I0121 10:12:01.079643 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483172-rt5rk"]
Jan 21 10:12:01 crc kubenswrapper[5113]: I0121 10:12:01.744223 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483172-rt5rk" event={"ID":"40989b1d-8b29-4203-8b56-2f9d9043e609","Type":"ContainerStarted","Data":"3127dded18118b3d45c641943883ef9fee539645a3b049f46c649e8fd749bb49"}
Jan 21 10:12:02 crc kubenswrapper[5113]: I0121 10:12:02.753581 5113 generic.go:358] "Generic (PLEG): container finished" podID="40989b1d-8b29-4203-8b56-2f9d9043e609" containerID="1c0328aa303f1c6405f40c0228e21970976ef4361d78b75e9f52634136321bde" exitCode=0
Jan 21 10:12:02 crc kubenswrapper[5113]: I0121 10:12:02.753919 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483172-rt5rk" event={"ID":"40989b1d-8b29-4203-8b56-2f9d9043e609","Type":"ContainerDied","Data":"1c0328aa303f1c6405f40c0228e21970976ef4361d78b75e9f52634136321bde"}
Jan 21 10:12:04 crc kubenswrapper[5113]: I0121 10:12:04.144421 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483172-rt5rk"
Jan 21 10:12:04 crc kubenswrapper[5113]: I0121 10:12:04.240827 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rvmb\" (UniqueName: \"kubernetes.io/projected/40989b1d-8b29-4203-8b56-2f9d9043e609-kube-api-access-9rvmb\") pod \"40989b1d-8b29-4203-8b56-2f9d9043e609\" (UID: \"40989b1d-8b29-4203-8b56-2f9d9043e609\") "
Jan 21 10:12:04 crc kubenswrapper[5113]: I0121 10:12:04.248292 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40989b1d-8b29-4203-8b56-2f9d9043e609-kube-api-access-9rvmb" (OuterVolumeSpecName: "kube-api-access-9rvmb") pod "40989b1d-8b29-4203-8b56-2f9d9043e609" (UID: "40989b1d-8b29-4203-8b56-2f9d9043e609"). InnerVolumeSpecName "kube-api-access-9rvmb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 10:12:04 crc kubenswrapper[5113]: I0121 10:12:04.343083 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9rvmb\" (UniqueName: \"kubernetes.io/projected/40989b1d-8b29-4203-8b56-2f9d9043e609-kube-api-access-9rvmb\") on node \"crc\" DevicePath \"\""
Jan 21 10:12:04 crc kubenswrapper[5113]: I0121 10:12:04.794772 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483172-rt5rk"
Jan 21 10:12:04 crc kubenswrapper[5113]: I0121 10:12:04.794819 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483172-rt5rk" event={"ID":"40989b1d-8b29-4203-8b56-2f9d9043e609","Type":"ContainerDied","Data":"3127dded18118b3d45c641943883ef9fee539645a3b049f46c649e8fd749bb49"}
Jan 21 10:12:04 crc kubenswrapper[5113]: I0121 10:12:04.794889 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3127dded18118b3d45c641943883ef9fee539645a3b049f46c649e8fd749bb49"
Jan 21 10:12:05 crc kubenswrapper[5113]: I0121 10:12:05.236397 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483166-fgkrk"]
Jan 21 10:12:05 crc kubenswrapper[5113]: I0121 10:12:05.246701 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483166-fgkrk"]
Jan 21 10:12:06 crc kubenswrapper[5113]: I0121 10:12:06.858385 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="069d79c3-4497-4996-8728-5e7dacd3e248" path="/var/lib/kubelet/pods/069d79c3-4497-4996-8728-5e7dacd3e248/volumes"
Jan 21 10:12:52 crc kubenswrapper[5113]: I0121 10:12:52.420979 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vcw7s_11da35cd-b282-4537-ac8f-b6c86b18c21f/kube-multus/0.log"
Jan 21 10:12:52 crc kubenswrapper[5113]: I0121 10:12:52.428036 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vcw7s_11da35cd-b282-4537-ac8f-b6c86b18c21f/kube-multus/0.log"
Jan 21 10:12:52 crc kubenswrapper[5113]: I0121 10:12:52.431474 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 21 10:12:52 crc kubenswrapper[5113]: I0121 10:12:52.438716 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Jan 21 10:13:03 crc kubenswrapper[5113]: I0121 10:13:03.316231 5113 scope.go:117] "RemoveContainer" containerID="9141c942ecfe33c22a29e5f34d8877dbb70fcb89cdfc4f1eba77ea8108d56fdd"
Jan 21 10:13:58 crc kubenswrapper[5113]: I0121 10:13:58.340208 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 10:13:58 crc kubenswrapper[5113]: I0121 10:13:58.341176 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 10:14:00 crc kubenswrapper[5113]: I0121 10:14:00.164218 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483174-jxj8t"]
Jan 21 10:14:00 crc kubenswrapper[5113]: I0121 10:14:00.165841 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="40989b1d-8b29-4203-8b56-2f9d9043e609" containerName="oc"
Jan 21 10:14:00 crc kubenswrapper[5113]: I0121 10:14:00.165866 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="40989b1d-8b29-4203-8b56-2f9d9043e609" containerName="oc"
Jan 21 10:14:00 crc kubenswrapper[5113]: I0121 10:14:00.166092 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="40989b1d-8b29-4203-8b56-2f9d9043e609" containerName="oc"
Jan 21 10:14:00 crc kubenswrapper[5113]: I0121 10:14:00.244270 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483174-jxj8t"]
Jan 21 10:14:00 crc kubenswrapper[5113]: I0121 10:14:00.244417 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483174-jxj8t"
Jan 21 10:14:00 crc kubenswrapper[5113]: I0121 10:14:00.246941 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 21 10:14:00 crc kubenswrapper[5113]: I0121 10:14:00.247086 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-mmrr5\""
Jan 21 10:14:00 crc kubenswrapper[5113]: I0121 10:14:00.248180 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 21 10:14:00 crc kubenswrapper[5113]: I0121 10:14:00.376320 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hncxh\" (UniqueName: \"kubernetes.io/projected/f290d176-4d18-448b-89bf-bcbca2e60113-kube-api-access-hncxh\") pod \"auto-csr-approver-29483174-jxj8t\" (UID: \"f290d176-4d18-448b-89bf-bcbca2e60113\") " pod="openshift-infra/auto-csr-approver-29483174-jxj8t"
Jan 21 10:14:00 crc kubenswrapper[5113]: I0121 10:14:00.478119 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hncxh\" (UniqueName: \"kubernetes.io/projected/f290d176-4d18-448b-89bf-bcbca2e60113-kube-api-access-hncxh\") pod \"auto-csr-approver-29483174-jxj8t\" (UID: \"f290d176-4d18-448b-89bf-bcbca2e60113\") " pod="openshift-infra/auto-csr-approver-29483174-jxj8t"
Jan 21 10:14:00 crc kubenswrapper[5113]: I0121 10:14:00.522653 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hncxh\" (UniqueName: \"kubernetes.io/projected/f290d176-4d18-448b-89bf-bcbca2e60113-kube-api-access-hncxh\") pod \"auto-csr-approver-29483174-jxj8t\" (UID: \"f290d176-4d18-448b-89bf-bcbca2e60113\") " pod="openshift-infra/auto-csr-approver-29483174-jxj8t"
Jan 21 10:14:00 crc kubenswrapper[5113]: I0121 10:14:00.576492 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483174-jxj8t"
Jan 21 10:14:00 crc kubenswrapper[5113]: I0121 10:14:00.878544 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483174-jxj8t"]
Jan 21 10:14:00 crc kubenswrapper[5113]: I0121 10:14:00.880475 5113 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 10:14:00 crc kubenswrapper[5113]: I0121 10:14:00.977212 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483174-jxj8t" event={"ID":"f290d176-4d18-448b-89bf-bcbca2e60113","Type":"ContainerStarted","Data":"f51f43ee7cf8a7bf30a31f8f028c85188ffc36e4d32b2e7dcebed51527231876"}
Jan 21 10:14:03 crc kubenswrapper[5113]: I0121 10:14:03.001523 5113 generic.go:358] "Generic (PLEG): container finished" podID="f290d176-4d18-448b-89bf-bcbca2e60113" containerID="796fd580db1887868c2bf84ea620da929b6c5c9477b0e8594f4968f5bb77be8f" exitCode=0
Jan 21 10:14:03 crc kubenswrapper[5113]: I0121 10:14:03.001609 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483174-jxj8t" event={"ID":"f290d176-4d18-448b-89bf-bcbca2e60113","Type":"ContainerDied","Data":"796fd580db1887868c2bf84ea620da929b6c5c9477b0e8594f4968f5bb77be8f"}
Jan 21 10:14:04 crc kubenswrapper[5113]: I0121 10:14:04.400033 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483174-jxj8t"
Jan 21 10:14:04 crc kubenswrapper[5113]: I0121 10:14:04.555020 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hncxh\" (UniqueName: \"kubernetes.io/projected/f290d176-4d18-448b-89bf-bcbca2e60113-kube-api-access-hncxh\") pod \"f290d176-4d18-448b-89bf-bcbca2e60113\" (UID: \"f290d176-4d18-448b-89bf-bcbca2e60113\") "
Jan 21 10:14:04 crc kubenswrapper[5113]: I0121 10:14:04.564474 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f290d176-4d18-448b-89bf-bcbca2e60113-kube-api-access-hncxh" (OuterVolumeSpecName: "kube-api-access-hncxh") pod "f290d176-4d18-448b-89bf-bcbca2e60113" (UID: "f290d176-4d18-448b-89bf-bcbca2e60113"). InnerVolumeSpecName "kube-api-access-hncxh". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 10:14:04 crc kubenswrapper[5113]: I0121 10:14:04.657021 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hncxh\" (UniqueName: \"kubernetes.io/projected/f290d176-4d18-448b-89bf-bcbca2e60113-kube-api-access-hncxh\") on node \"crc\" DevicePath \"\""
Jan 21 10:14:05 crc kubenswrapper[5113]: I0121 10:14:05.022119 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483174-jxj8t" event={"ID":"f290d176-4d18-448b-89bf-bcbca2e60113","Type":"ContainerDied","Data":"f51f43ee7cf8a7bf30a31f8f028c85188ffc36e4d32b2e7dcebed51527231876"}
Jan 21 10:14:05 crc kubenswrapper[5113]: I0121 10:14:05.022197 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f51f43ee7cf8a7bf30a31f8f028c85188ffc36e4d32b2e7dcebed51527231876"
Jan 21 10:14:05 crc kubenswrapper[5113]: I0121 10:14:05.022136 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483174-jxj8t"
Jan 21 10:14:05 crc kubenswrapper[5113]: I0121 10:14:05.532152 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483168-qkqrd"]
Jan 21 10:14:05 crc kubenswrapper[5113]: I0121 10:14:05.543547 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483168-qkqrd"]
Jan 21 10:14:06 crc kubenswrapper[5113]: I0121 10:14:06.857590 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3d9a3d5-9fbe-4662-9536-9d5a2ce34a85" path="/var/lib/kubelet/pods/d3d9a3d5-9fbe-4662-9536-9d5a2ce34a85/volumes"
Jan 21 10:14:14 crc kubenswrapper[5113]: I0121 10:14:14.384360 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4xw52"]
Jan 21 10:14:14 crc kubenswrapper[5113]: I0121 10:14:14.386402 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f290d176-4d18-448b-89bf-bcbca2e60113" containerName="oc"
Jan 21 10:14:14 crc kubenswrapper[5113]: I0121 10:14:14.386435 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="f290d176-4d18-448b-89bf-bcbca2e60113" containerName="oc"
Jan 21 10:14:14 crc kubenswrapper[5113]: I0121 10:14:14.386875 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="f290d176-4d18-448b-89bf-bcbca2e60113" containerName="oc"
Jan 21 10:14:14 crc kubenswrapper[5113]: I0121 10:14:14.531013 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4xw52"]
Jan 21 10:14:14 crc kubenswrapper[5113]: I0121 10:14:14.531194 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4xw52"
Jan 21 10:14:14 crc kubenswrapper[5113]: I0121 10:14:14.647730 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4klzr\" (UniqueName: \"kubernetes.io/projected/411e182b-63f0-46d0-ab6a-8d98f893b8c5-kube-api-access-4klzr\") pod \"certified-operators-4xw52\" (UID: \"411e182b-63f0-46d0-ab6a-8d98f893b8c5\") " pod="openshift-marketplace/certified-operators-4xw52"
Jan 21 10:14:14 crc kubenswrapper[5113]: I0121 10:14:14.648197 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/411e182b-63f0-46d0-ab6a-8d98f893b8c5-utilities\") pod \"certified-operators-4xw52\" (UID: \"411e182b-63f0-46d0-ab6a-8d98f893b8c5\") " pod="openshift-marketplace/certified-operators-4xw52"
Jan 21 10:14:14 crc kubenswrapper[5113]: I0121 10:14:14.648257 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/411e182b-63f0-46d0-ab6a-8d98f893b8c5-catalog-content\") pod \"certified-operators-4xw52\" (UID: \"411e182b-63f0-46d0-ab6a-8d98f893b8c5\") " pod="openshift-marketplace/certified-operators-4xw52"
Jan 21 10:14:14 crc kubenswrapper[5113]: I0121 10:14:14.750495 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/411e182b-63f0-46d0-ab6a-8d98f893b8c5-utilities\") pod \"certified-operators-4xw52\" (UID: \"411e182b-63f0-46d0-ab6a-8d98f893b8c5\") " pod="openshift-marketplace/certified-operators-4xw52"
Jan 21 10:14:14 crc kubenswrapper[5113]: I0121 10:14:14.750564 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/411e182b-63f0-46d0-ab6a-8d98f893b8c5-catalog-content\") pod \"certified-operators-4xw52\" (UID: \"411e182b-63f0-46d0-ab6a-8d98f893b8c5\") " pod="openshift-marketplace/certified-operators-4xw52"
Jan 21 10:14:14 crc kubenswrapper[5113]: I0121 10:14:14.750677 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4klzr\" (UniqueName: \"kubernetes.io/projected/411e182b-63f0-46d0-ab6a-8d98f893b8c5-kube-api-access-4klzr\") pod \"certified-operators-4xw52\" (UID: \"411e182b-63f0-46d0-ab6a-8d98f893b8c5\") " pod="openshift-marketplace/certified-operators-4xw52"
Jan 21 10:14:14 crc kubenswrapper[5113]: I0121 10:14:14.752037 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/411e182b-63f0-46d0-ab6a-8d98f893b8c5-utilities\") pod \"certified-operators-4xw52\" (UID: \"411e182b-63f0-46d0-ab6a-8d98f893b8c5\") " pod="openshift-marketplace/certified-operators-4xw52"
Jan 21 10:14:14 crc kubenswrapper[5113]: I0121 10:14:14.752496 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/411e182b-63f0-46d0-ab6a-8d98f893b8c5-catalog-content\") pod \"certified-operators-4xw52\" (UID: \"411e182b-63f0-46d0-ab6a-8d98f893b8c5\") " pod="openshift-marketplace/certified-operators-4xw52"
Jan 21 10:14:14 crc kubenswrapper[5113]: I0121 10:14:14.785186 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4klzr\" (UniqueName: \"kubernetes.io/projected/411e182b-63f0-46d0-ab6a-8d98f893b8c5-kube-api-access-4klzr\") pod \"certified-operators-4xw52\" (UID: \"411e182b-63f0-46d0-ab6a-8d98f893b8c5\") " pod="openshift-marketplace/certified-operators-4xw52"
Jan 21 10:14:14 crc kubenswrapper[5113]: I0121 10:14:14.852092 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4xw52"
Jan 21 10:14:15 crc kubenswrapper[5113]: I0121 10:14:15.350901 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4xw52"]
Jan 21 10:14:16 crc kubenswrapper[5113]: I0121 10:14:16.138511 5113 generic.go:358] "Generic (PLEG): container finished" podID="411e182b-63f0-46d0-ab6a-8d98f893b8c5" containerID="9068bd609179298903c4bdc62022ed42d467325e1944fccb43d5555b5ad2e072" exitCode=0
Jan 21 10:14:16 crc kubenswrapper[5113]: I0121 10:14:16.138940 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4xw52" event={"ID":"411e182b-63f0-46d0-ab6a-8d98f893b8c5","Type":"ContainerDied","Data":"9068bd609179298903c4bdc62022ed42d467325e1944fccb43d5555b5ad2e072"}
Jan 21 10:14:16 crc kubenswrapper[5113]: I0121 10:14:16.141462 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4xw52" event={"ID":"411e182b-63f0-46d0-ab6a-8d98f893b8c5","Type":"ContainerStarted","Data":"dfee2deee463907399978f54b4854b8b5b4cab52b04a184c5413b8f0a30e52d8"}
Jan 21 10:14:17 crc kubenswrapper[5113]: I0121 10:14:17.152675 5113 generic.go:358] "Generic (PLEG): container finished" podID="411e182b-63f0-46d0-ab6a-8d98f893b8c5" containerID="7067ddcc7d8779624534a59da7a4f0d510561d9418683e2ef91e4838166827b1" exitCode=0
Jan 21 10:14:17 crc kubenswrapper[5113]: I0121 10:14:17.153975 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4xw52" event={"ID":"411e182b-63f0-46d0-ab6a-8d98f893b8c5","Type":"ContainerDied","Data":"7067ddcc7d8779624534a59da7a4f0d510561d9418683e2ef91e4838166827b1"}
Jan 21 10:14:18 crc kubenswrapper[5113]: I0121 10:14:18.214184 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4xw52" event={"ID":"411e182b-63f0-46d0-ab6a-8d98f893b8c5","Type":"ContainerStarted","Data":"81d429e970d5d2a740a8a41f6ffec7247b08a5c975e4f77187b775486c65ea28"}
Jan 21 10:14:18 crc kubenswrapper[5113]: I0121 10:14:18.251294 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4xw52" podStartSLOduration=3.717810547 podStartE2EDuration="4.251265386s" podCreationTimestamp="2026-01-21 10:14:14 +0000 UTC" firstStartedPulling="2026-01-21 10:14:16.141113748 +0000 UTC m=+3385.641940817" lastFinishedPulling="2026-01-21 10:14:16.674568597 +0000 UTC m=+3386.175395656" observedRunningTime="2026-01-21 10:14:18.248877479 +0000 UTC m=+3387.749704568" watchObservedRunningTime="2026-01-21 10:14:18.251265386 +0000 UTC m=+3387.752092475"
Jan 21 10:14:24 crc kubenswrapper[5113]: I0121 10:14:24.858813 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-4xw52"
Jan 21 10:14:24 crc kubenswrapper[5113]: I0121 10:14:24.859261 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4xw52"
Jan 21 10:14:24 crc kubenswrapper[5113]: I0121 10:14:24.916094 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4xw52"
Jan 21 10:14:25 crc kubenswrapper[5113]: I0121 10:14:25.329078 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4xw52"
Jan 21 10:14:25 crc kubenswrapper[5113]: I0121 10:14:25.379445 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4xw52"]
Jan 21 10:14:27 crc kubenswrapper[5113]: I0121 10:14:27.291158 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4xw52" podUID="411e182b-63f0-46d0-ab6a-8d98f893b8c5" containerName="registry-server" containerID="cri-o://81d429e970d5d2a740a8a41f6ffec7247b08a5c975e4f77187b775486c65ea28" gracePeriod=2
Jan 21 10:14:28 crc kubenswrapper[5113]: I0121 10:14:28.305762 5113 generic.go:358] "Generic (PLEG): container finished" podID="411e182b-63f0-46d0-ab6a-8d98f893b8c5" containerID="81d429e970d5d2a740a8a41f6ffec7247b08a5c975e4f77187b775486c65ea28" exitCode=0
Jan 21 10:14:28 crc kubenswrapper[5113]: I0121 10:14:28.305872 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4xw52" event={"ID":"411e182b-63f0-46d0-ab6a-8d98f893b8c5","Type":"ContainerDied","Data":"81d429e970d5d2a740a8a41f6ffec7247b08a5c975e4f77187b775486c65ea28"}
Jan 21 10:14:28 crc kubenswrapper[5113]: I0121 10:14:28.340538 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 10:14:28 crc kubenswrapper[5113]: I0121 10:14:28.340610 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 10:14:28 crc kubenswrapper[5113]: I0121 10:14:28.375241 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4xw52"
Jan 21 10:14:28 crc kubenswrapper[5113]: I0121 10:14:28.484590 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/411e182b-63f0-46d0-ab6a-8d98f893b8c5-utilities\") pod \"411e182b-63f0-46d0-ab6a-8d98f893b8c5\" (UID: \"411e182b-63f0-46d0-ab6a-8d98f893b8c5\") "
Jan 21 10:14:28 crc kubenswrapper[5113]: I0121 10:14:28.484900 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/411e182b-63f0-46d0-ab6a-8d98f893b8c5-catalog-content\") pod \"411e182b-63f0-46d0-ab6a-8d98f893b8c5\" (UID: \"411e182b-63f0-46d0-ab6a-8d98f893b8c5\") "
Jan 21 10:14:28 crc kubenswrapper[5113]: I0121 10:14:28.485044 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4klzr\" (UniqueName: \"kubernetes.io/projected/411e182b-63f0-46d0-ab6a-8d98f893b8c5-kube-api-access-4klzr\") pod \"411e182b-63f0-46d0-ab6a-8d98f893b8c5\" (UID: \"411e182b-63f0-46d0-ab6a-8d98f893b8c5\") "
Jan 21 10:14:28 crc kubenswrapper[5113]: I0121 10:14:28.486382 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/411e182b-63f0-46d0-ab6a-8d98f893b8c5-utilities" (OuterVolumeSpecName: "utilities") pod "411e182b-63f0-46d0-ab6a-8d98f893b8c5" (UID: "411e182b-63f0-46d0-ab6a-8d98f893b8c5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Jan 21 10:14:28 crc kubenswrapper[5113]: I0121 10:14:28.492091 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/411e182b-63f0-46d0-ab6a-8d98f893b8c5-kube-api-access-4klzr" (OuterVolumeSpecName: "kube-api-access-4klzr") pod "411e182b-63f0-46d0-ab6a-8d98f893b8c5" (UID: "411e182b-63f0-46d0-ab6a-8d98f893b8c5"). InnerVolumeSpecName "kube-api-access-4klzr".
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:14:28 crc kubenswrapper[5113]: I0121 10:14:28.524876 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/411e182b-63f0-46d0-ab6a-8d98f893b8c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "411e182b-63f0-46d0-ab6a-8d98f893b8c5" (UID: "411e182b-63f0-46d0-ab6a-8d98f893b8c5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:14:28 crc kubenswrapper[5113]: I0121 10:14:28.586669 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/411e182b-63f0-46d0-ab6a-8d98f893b8c5-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:14:28 crc kubenswrapper[5113]: I0121 10:14:28.586713 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/411e182b-63f0-46d0-ab6a-8d98f893b8c5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:14:28 crc kubenswrapper[5113]: I0121 10:14:28.586747 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4klzr\" (UniqueName: \"kubernetes.io/projected/411e182b-63f0-46d0-ab6a-8d98f893b8c5-kube-api-access-4klzr\") on node \"crc\" DevicePath \"\"" Jan 21 10:14:29 crc kubenswrapper[5113]: I0121 10:14:29.321877 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4xw52" Jan 21 10:14:29 crc kubenswrapper[5113]: I0121 10:14:29.321891 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4xw52" event={"ID":"411e182b-63f0-46d0-ab6a-8d98f893b8c5","Type":"ContainerDied","Data":"dfee2deee463907399978f54b4854b8b5b4cab52b04a184c5413b8f0a30e52d8"} Jan 21 10:14:29 crc kubenswrapper[5113]: I0121 10:14:29.322520 5113 scope.go:117] "RemoveContainer" containerID="81d429e970d5d2a740a8a41f6ffec7247b08a5c975e4f77187b775486c65ea28" Jan 21 10:14:29 crc kubenswrapper[5113]: I0121 10:14:29.360005 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4xw52"] Jan 21 10:14:29 crc kubenswrapper[5113]: I0121 10:14:29.363830 5113 scope.go:117] "RemoveContainer" containerID="7067ddcc7d8779624534a59da7a4f0d510561d9418683e2ef91e4838166827b1" Jan 21 10:14:29 crc kubenswrapper[5113]: I0121 10:14:29.374298 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4xw52"] Jan 21 10:14:29 crc kubenswrapper[5113]: I0121 10:14:29.409597 5113 scope.go:117] "RemoveContainer" containerID="9068bd609179298903c4bdc62022ed42d467325e1944fccb43d5555b5ad2e072" Jan 21 10:14:30 crc kubenswrapper[5113]: I0121 10:14:30.860137 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="411e182b-63f0-46d0-ab6a-8d98f893b8c5" path="/var/lib/kubelet/pods/411e182b-63f0-46d0-ab6a-8d98f893b8c5/volumes" Jan 21 10:14:58 crc kubenswrapper[5113]: I0121 10:14:58.340180 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:14:58 crc kubenswrapper[5113]: I0121 10:14:58.340876 5113 prober.go:120] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:14:58 crc kubenswrapper[5113]: I0121 10:14:58.340962 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" Jan 21 10:14:58 crc kubenswrapper[5113]: I0121 10:14:58.342007 5113 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"54f41eed3d358a3564691b0504015bd4ae43e2f72e6737cc1ce3b758639d857d"} pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 10:14:58 crc kubenswrapper[5113]: I0121 10:14:58.342136 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" containerID="cri-o://54f41eed3d358a3564691b0504015bd4ae43e2f72e6737cc1ce3b758639d857d" gracePeriod=600 Jan 21 10:14:58 crc kubenswrapper[5113]: I0121 10:14:58.596657 5113 generic.go:358] "Generic (PLEG): container finished" podID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerID="54f41eed3d358a3564691b0504015bd4ae43e2f72e6737cc1ce3b758639d857d" exitCode=0 Jan 21 10:14:58 crc kubenswrapper[5113]: I0121 10:14:58.596769 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" event={"ID":"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd","Type":"ContainerDied","Data":"54f41eed3d358a3564691b0504015bd4ae43e2f72e6737cc1ce3b758639d857d"} Jan 21 10:14:58 crc kubenswrapper[5113]: I0121 10:14:58.597375 5113 
scope.go:117] "RemoveContainer" containerID="a632198cd672a90b72f5e279e21531758de9f10569214cbdb0b46bdbfae29b1f" Jan 21 10:14:59 crc kubenswrapper[5113]: I0121 10:14:59.612104 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" event={"ID":"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd","Type":"ContainerStarted","Data":"7a222ced9e6ada1383ad249969c0fe50f3b46dad64f151d2be479e40f9e59f23"} Jan 21 10:15:00 crc kubenswrapper[5113]: I0121 10:15:00.163798 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483175-pgjw8"] Jan 21 10:15:00 crc kubenswrapper[5113]: I0121 10:15:00.165126 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="411e182b-63f0-46d0-ab6a-8d98f893b8c5" containerName="extract-utilities" Jan 21 10:15:00 crc kubenswrapper[5113]: I0121 10:15:00.165170 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="411e182b-63f0-46d0-ab6a-8d98f893b8c5" containerName="extract-utilities" Jan 21 10:15:00 crc kubenswrapper[5113]: I0121 10:15:00.165229 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="411e182b-63f0-46d0-ab6a-8d98f893b8c5" containerName="registry-server" Jan 21 10:15:00 crc kubenswrapper[5113]: I0121 10:15:00.165247 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="411e182b-63f0-46d0-ab6a-8d98f893b8c5" containerName="registry-server" Jan 21 10:15:00 crc kubenswrapper[5113]: I0121 10:15:00.165347 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="411e182b-63f0-46d0-ab6a-8d98f893b8c5" containerName="extract-content" Jan 21 10:15:00 crc kubenswrapper[5113]: I0121 10:15:00.165362 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="411e182b-63f0-46d0-ab6a-8d98f893b8c5" containerName="extract-content" Jan 21 10:15:00 crc kubenswrapper[5113]: I0121 10:15:00.165611 5113 memory_manager.go:356] "RemoveStaleState removing 
state" podUID="411e182b-63f0-46d0-ab6a-8d98f893b8c5" containerName="registry-server" Jan 21 10:15:00 crc kubenswrapper[5113]: I0121 10:15:00.172077 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-pgjw8" Jan 21 10:15:00 crc kubenswrapper[5113]: I0121 10:15:00.174507 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 21 10:15:00 crc kubenswrapper[5113]: I0121 10:15:00.175894 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 21 10:15:00 crc kubenswrapper[5113]: I0121 10:15:00.180390 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483175-pgjw8"] Jan 21 10:15:00 crc kubenswrapper[5113]: I0121 10:15:00.233895 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmwdd\" (UniqueName: \"kubernetes.io/projected/6a2467fb-01c0-4711-923b-4cd48faaf3fb-kube-api-access-gmwdd\") pod \"collect-profiles-29483175-pgjw8\" (UID: \"6a2467fb-01c0-4711-923b-4cd48faaf3fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-pgjw8" Jan 21 10:15:00 crc kubenswrapper[5113]: I0121 10:15:00.233943 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a2467fb-01c0-4711-923b-4cd48faaf3fb-secret-volume\") pod \"collect-profiles-29483175-pgjw8\" (UID: \"6a2467fb-01c0-4711-923b-4cd48faaf3fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-pgjw8" Jan 21 10:15:00 crc kubenswrapper[5113]: I0121 10:15:00.233974 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/6a2467fb-01c0-4711-923b-4cd48faaf3fb-config-volume\") pod \"collect-profiles-29483175-pgjw8\" (UID: \"6a2467fb-01c0-4711-923b-4cd48faaf3fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-pgjw8" Jan 21 10:15:00 crc kubenswrapper[5113]: I0121 10:15:00.335835 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gmwdd\" (UniqueName: \"kubernetes.io/projected/6a2467fb-01c0-4711-923b-4cd48faaf3fb-kube-api-access-gmwdd\") pod \"collect-profiles-29483175-pgjw8\" (UID: \"6a2467fb-01c0-4711-923b-4cd48faaf3fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-pgjw8" Jan 21 10:15:00 crc kubenswrapper[5113]: I0121 10:15:00.336161 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a2467fb-01c0-4711-923b-4cd48faaf3fb-secret-volume\") pod \"collect-profiles-29483175-pgjw8\" (UID: \"6a2467fb-01c0-4711-923b-4cd48faaf3fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-pgjw8" Jan 21 10:15:00 crc kubenswrapper[5113]: I0121 10:15:00.336205 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a2467fb-01c0-4711-923b-4cd48faaf3fb-config-volume\") pod \"collect-profiles-29483175-pgjw8\" (UID: \"6a2467fb-01c0-4711-923b-4cd48faaf3fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-pgjw8" Jan 21 10:15:00 crc kubenswrapper[5113]: I0121 10:15:00.337223 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a2467fb-01c0-4711-923b-4cd48faaf3fb-config-volume\") pod \"collect-profiles-29483175-pgjw8\" (UID: \"6a2467fb-01c0-4711-923b-4cd48faaf3fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-pgjw8" Jan 21 10:15:00 crc kubenswrapper[5113]: I0121 
10:15:00.348291 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a2467fb-01c0-4711-923b-4cd48faaf3fb-secret-volume\") pod \"collect-profiles-29483175-pgjw8\" (UID: \"6a2467fb-01c0-4711-923b-4cd48faaf3fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-pgjw8" Jan 21 10:15:00 crc kubenswrapper[5113]: I0121 10:15:00.366388 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmwdd\" (UniqueName: \"kubernetes.io/projected/6a2467fb-01c0-4711-923b-4cd48faaf3fb-kube-api-access-gmwdd\") pod \"collect-profiles-29483175-pgjw8\" (UID: \"6a2467fb-01c0-4711-923b-4cd48faaf3fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-pgjw8" Jan 21 10:15:00 crc kubenswrapper[5113]: I0121 10:15:00.499127 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-pgjw8" Jan 21 10:15:00 crc kubenswrapper[5113]: I0121 10:15:00.879595 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483175-pgjw8"] Jan 21 10:15:01 crc kubenswrapper[5113]: I0121 10:15:01.633245 5113 generic.go:358] "Generic (PLEG): container finished" podID="6a2467fb-01c0-4711-923b-4cd48faaf3fb" containerID="ef0d6e58c35d7675d2b1514316d217a2f68f0001319ed71f7444fb6156e98463" exitCode=0 Jan 21 10:15:01 crc kubenswrapper[5113]: I0121 10:15:01.633328 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-pgjw8" event={"ID":"6a2467fb-01c0-4711-923b-4cd48faaf3fb","Type":"ContainerDied","Data":"ef0d6e58c35d7675d2b1514316d217a2f68f0001319ed71f7444fb6156e98463"} Jan 21 10:15:01 crc kubenswrapper[5113]: I0121 10:15:01.633728 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-pgjw8" 
event={"ID":"6a2467fb-01c0-4711-923b-4cd48faaf3fb","Type":"ContainerStarted","Data":"02623453ca805ffceaff353e14972024b8605f4a6a0c959ef051e0e11ca08569"} Jan 21 10:15:03 crc kubenswrapper[5113]: I0121 10:15:03.031176 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-pgjw8" Jan 21 10:15:03 crc kubenswrapper[5113]: I0121 10:15:03.081624 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a2467fb-01c0-4711-923b-4cd48faaf3fb-config-volume\") pod \"6a2467fb-01c0-4711-923b-4cd48faaf3fb\" (UID: \"6a2467fb-01c0-4711-923b-4cd48faaf3fb\") " Jan 21 10:15:03 crc kubenswrapper[5113]: I0121 10:15:03.081692 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a2467fb-01c0-4711-923b-4cd48faaf3fb-secret-volume\") pod \"6a2467fb-01c0-4711-923b-4cd48faaf3fb\" (UID: \"6a2467fb-01c0-4711-923b-4cd48faaf3fb\") " Jan 21 10:15:03 crc kubenswrapper[5113]: I0121 10:15:03.081718 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmwdd\" (UniqueName: \"kubernetes.io/projected/6a2467fb-01c0-4711-923b-4cd48faaf3fb-kube-api-access-gmwdd\") pod \"6a2467fb-01c0-4711-923b-4cd48faaf3fb\" (UID: \"6a2467fb-01c0-4711-923b-4cd48faaf3fb\") " Jan 21 10:15:03 crc kubenswrapper[5113]: I0121 10:15:03.083650 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a2467fb-01c0-4711-923b-4cd48faaf3fb-config-volume" (OuterVolumeSpecName: "config-volume") pod "6a2467fb-01c0-4711-923b-4cd48faaf3fb" (UID: "6a2467fb-01c0-4711-923b-4cd48faaf3fb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 21 10:15:03 crc kubenswrapper[5113]: I0121 10:15:03.089204 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a2467fb-01c0-4711-923b-4cd48faaf3fb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6a2467fb-01c0-4711-923b-4cd48faaf3fb" (UID: "6a2467fb-01c0-4711-923b-4cd48faaf3fb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 21 10:15:03 crc kubenswrapper[5113]: I0121 10:15:03.094163 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a2467fb-01c0-4711-923b-4cd48faaf3fb-kube-api-access-gmwdd" (OuterVolumeSpecName: "kube-api-access-gmwdd") pod "6a2467fb-01c0-4711-923b-4cd48faaf3fb" (UID: "6a2467fb-01c0-4711-923b-4cd48faaf3fb"). InnerVolumeSpecName "kube-api-access-gmwdd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:15:03 crc kubenswrapper[5113]: I0121 10:15:03.183548 5113 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a2467fb-01c0-4711-923b-4cd48faaf3fb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 10:15:03 crc kubenswrapper[5113]: I0121 10:15:03.183604 5113 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6a2467fb-01c0-4711-923b-4cd48faaf3fb-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 10:15:03 crc kubenswrapper[5113]: I0121 10:15:03.183623 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gmwdd\" (UniqueName: \"kubernetes.io/projected/6a2467fb-01c0-4711-923b-4cd48faaf3fb-kube-api-access-gmwdd\") on node \"crc\" DevicePath \"\"" Jan 21 10:15:03 crc kubenswrapper[5113]: I0121 10:15:03.485274 5113 scope.go:117] "RemoveContainer" containerID="5c4dc5b1e0cf7bc0f14a711c11eedf750b84c90e3e59b561d7e45c8c410bcc75" Jan 21 10:15:03 crc kubenswrapper[5113]: I0121 
10:15:03.655139 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-pgjw8" event={"ID":"6a2467fb-01c0-4711-923b-4cd48faaf3fb","Type":"ContainerDied","Data":"02623453ca805ffceaff353e14972024b8605f4a6a0c959ef051e0e11ca08569"} Jan 21 10:15:03 crc kubenswrapper[5113]: I0121 10:15:03.655232 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02623453ca805ffceaff353e14972024b8605f4a6a0c959ef051e0e11ca08569" Jan 21 10:15:03 crc kubenswrapper[5113]: I0121 10:15:03.655181 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483175-pgjw8" Jan 21 10:15:04 crc kubenswrapper[5113]: I0121 10:15:04.127594 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483130-xkb97"] Jan 21 10:15:04 crc kubenswrapper[5113]: I0121 10:15:04.137334 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483130-xkb97"] Jan 21 10:15:04 crc kubenswrapper[5113]: I0121 10:15:04.860115 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc41973b-d903-4dfd-854d-6da1717bc76e" path="/var/lib/kubelet/pods/cc41973b-d903-4dfd-854d-6da1717bc76e/volumes" Jan 21 10:16:00 crc kubenswrapper[5113]: I0121 10:16:00.152570 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483176-pcbgh"] Jan 21 10:16:00 crc kubenswrapper[5113]: I0121 10:16:00.154113 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6a2467fb-01c0-4711-923b-4cd48faaf3fb" containerName="collect-profiles" Jan 21 10:16:00 crc kubenswrapper[5113]: I0121 10:16:00.154138 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a2467fb-01c0-4711-923b-4cd48faaf3fb" containerName="collect-profiles" Jan 21 10:16:00 crc kubenswrapper[5113]: 
I0121 10:16:00.154670 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="6a2467fb-01c0-4711-923b-4cd48faaf3fb" containerName="collect-profiles" Jan 21 10:16:00 crc kubenswrapper[5113]: I0121 10:16:00.162631 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483176-pcbgh"] Jan 21 10:16:00 crc kubenswrapper[5113]: I0121 10:16:00.162848 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483176-pcbgh" Jan 21 10:16:00 crc kubenswrapper[5113]: I0121 10:16:00.168256 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-mmrr5\"" Jan 21 10:16:00 crc kubenswrapper[5113]: I0121 10:16:00.169015 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 10:16:00 crc kubenswrapper[5113]: I0121 10:16:00.169463 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 10:16:00 crc kubenswrapper[5113]: I0121 10:16:00.294622 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5wpm\" (UniqueName: \"kubernetes.io/projected/fce39ddd-5e7a-4843-961b-964a2cccfe1d-kube-api-access-r5wpm\") pod \"auto-csr-approver-29483176-pcbgh\" (UID: \"fce39ddd-5e7a-4843-961b-964a2cccfe1d\") " pod="openshift-infra/auto-csr-approver-29483176-pcbgh" Jan 21 10:16:00 crc kubenswrapper[5113]: I0121 10:16:00.396417 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r5wpm\" (UniqueName: \"kubernetes.io/projected/fce39ddd-5e7a-4843-961b-964a2cccfe1d-kube-api-access-r5wpm\") pod \"auto-csr-approver-29483176-pcbgh\" (UID: \"fce39ddd-5e7a-4843-961b-964a2cccfe1d\") " pod="openshift-infra/auto-csr-approver-29483176-pcbgh" Jan 21 10:16:00 crc 
kubenswrapper[5113]: I0121 10:16:00.445931 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5wpm\" (UniqueName: \"kubernetes.io/projected/fce39ddd-5e7a-4843-961b-964a2cccfe1d-kube-api-access-r5wpm\") pod \"auto-csr-approver-29483176-pcbgh\" (UID: \"fce39ddd-5e7a-4843-961b-964a2cccfe1d\") " pod="openshift-infra/auto-csr-approver-29483176-pcbgh" Jan 21 10:16:00 crc kubenswrapper[5113]: I0121 10:16:00.522939 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483176-pcbgh" Jan 21 10:16:00 crc kubenswrapper[5113]: I0121 10:16:00.968177 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483176-pcbgh"] Jan 21 10:16:00 crc kubenswrapper[5113]: W0121 10:16:00.977053 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfce39ddd_5e7a_4843_961b_964a2cccfe1d.slice/crio-2f9f1c6731220aadd53980f0d4c29d1c60972eed8c673d1c3f6c1b252e3c688e WatchSource:0}: Error finding container 2f9f1c6731220aadd53980f0d4c29d1c60972eed8c673d1c3f6c1b252e3c688e: Status 404 returned error can't find the container with id 2f9f1c6731220aadd53980f0d4c29d1c60972eed8c673d1c3f6c1b252e3c688e Jan 21 10:16:01 crc kubenswrapper[5113]: I0121 10:16:01.601948 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483176-pcbgh" event={"ID":"fce39ddd-5e7a-4843-961b-964a2cccfe1d","Type":"ContainerStarted","Data":"2f9f1c6731220aadd53980f0d4c29d1c60972eed8c673d1c3f6c1b252e3c688e"} Jan 21 10:16:02 crc kubenswrapper[5113]: I0121 10:16:02.613660 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483176-pcbgh" event={"ID":"fce39ddd-5e7a-4843-961b-964a2cccfe1d","Type":"ContainerStarted","Data":"f465d21fe2143aee264cd70018485e5506f4329322a510b03eb7e3de98ee0a91"} Jan 21 10:16:02 crc kubenswrapper[5113]: I0121 
10:16:02.638576 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29483176-pcbgh" podStartSLOduration=1.705958389 podStartE2EDuration="2.638554367s" podCreationTimestamp="2026-01-21 10:16:00 +0000 UTC" firstStartedPulling="2026-01-21 10:16:00.981201356 +0000 UTC m=+3490.482028415" lastFinishedPulling="2026-01-21 10:16:01.913797304 +0000 UTC m=+3491.414624393" observedRunningTime="2026-01-21 10:16:02.630968145 +0000 UTC m=+3492.131795204" watchObservedRunningTime="2026-01-21 10:16:02.638554367 +0000 UTC m=+3492.139381426" Jan 21 10:16:03 crc kubenswrapper[5113]: I0121 10:16:03.624061 5113 generic.go:358] "Generic (PLEG): container finished" podID="fce39ddd-5e7a-4843-961b-964a2cccfe1d" containerID="f465d21fe2143aee264cd70018485e5506f4329322a510b03eb7e3de98ee0a91" exitCode=0 Jan 21 10:16:03 crc kubenswrapper[5113]: I0121 10:16:03.624123 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483176-pcbgh" event={"ID":"fce39ddd-5e7a-4843-961b-964a2cccfe1d","Type":"ContainerDied","Data":"f465d21fe2143aee264cd70018485e5506f4329322a510b03eb7e3de98ee0a91"} Jan 21 10:16:03 crc kubenswrapper[5113]: I0121 10:16:03.680092 5113 scope.go:117] "RemoveContainer" containerID="0b3c844b04444e58eb5f5492cb43b305cf14fa6a24471c7dfeb8bdecb5cdc73e" Jan 21 10:16:05 crc kubenswrapper[5113]: I0121 10:16:05.018610 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483176-pcbgh" Jan 21 10:16:05 crc kubenswrapper[5113]: I0121 10:16:05.189906 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5wpm\" (UniqueName: \"kubernetes.io/projected/fce39ddd-5e7a-4843-961b-964a2cccfe1d-kube-api-access-r5wpm\") pod \"fce39ddd-5e7a-4843-961b-964a2cccfe1d\" (UID: \"fce39ddd-5e7a-4843-961b-964a2cccfe1d\") " Jan 21 10:16:05 crc kubenswrapper[5113]: I0121 10:16:05.200205 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fce39ddd-5e7a-4843-961b-964a2cccfe1d-kube-api-access-r5wpm" (OuterVolumeSpecName: "kube-api-access-r5wpm") pod "fce39ddd-5e7a-4843-961b-964a2cccfe1d" (UID: "fce39ddd-5e7a-4843-961b-964a2cccfe1d"). InnerVolumeSpecName "kube-api-access-r5wpm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:16:05 crc kubenswrapper[5113]: I0121 10:16:05.292914 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r5wpm\" (UniqueName: \"kubernetes.io/projected/fce39ddd-5e7a-4843-961b-964a2cccfe1d-kube-api-access-r5wpm\") on node \"crc\" DevicePath \"\"" Jan 21 10:16:05 crc kubenswrapper[5113]: I0121 10:16:05.643437 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483176-pcbgh" Jan 21 10:16:05 crc kubenswrapper[5113]: I0121 10:16:05.643484 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483176-pcbgh" event={"ID":"fce39ddd-5e7a-4843-961b-964a2cccfe1d","Type":"ContainerDied","Data":"2f9f1c6731220aadd53980f0d4c29d1c60972eed8c673d1c3f6c1b252e3c688e"} Jan 21 10:16:05 crc kubenswrapper[5113]: I0121 10:16:05.643539 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f9f1c6731220aadd53980f0d4c29d1c60972eed8c673d1c3f6c1b252e3c688e" Jan 21 10:16:05 crc kubenswrapper[5113]: I0121 10:16:05.703088 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483170-tlfsp"] Jan 21 10:16:05 crc kubenswrapper[5113]: I0121 10:16:05.709491 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483170-tlfsp"] Jan 21 10:16:06 crc kubenswrapper[5113]: I0121 10:16:06.857270 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6c8e241-2f98-483f-8fce-5de20dbd7b77" path="/var/lib/kubelet/pods/b6c8e241-2f98-483f-8fce-5de20dbd7b77/volumes" Jan 21 10:16:25 crc kubenswrapper[5113]: I0121 10:16:25.008595 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hdvh7"] Jan 21 10:16:25 crc kubenswrapper[5113]: I0121 10:16:25.010080 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fce39ddd-5e7a-4843-961b-964a2cccfe1d" containerName="oc" Jan 21 10:16:25 crc kubenswrapper[5113]: I0121 10:16:25.010096 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="fce39ddd-5e7a-4843-961b-964a2cccfe1d" containerName="oc" Jan 21 10:16:25 crc kubenswrapper[5113]: I0121 10:16:25.010311 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="fce39ddd-5e7a-4843-961b-964a2cccfe1d" containerName="oc" Jan 21 10:16:25 crc 
kubenswrapper[5113]: I0121 10:16:25.015222 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hdvh7" Jan 21 10:16:25 crc kubenswrapper[5113]: I0121 10:16:25.037360 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hdvh7"] Jan 21 10:16:25 crc kubenswrapper[5113]: I0121 10:16:25.093621 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt87d\" (UniqueName: \"kubernetes.io/projected/ca0d2f41-3394-4725-a47b-b4095950996d-kube-api-access-rt87d\") pod \"redhat-operators-hdvh7\" (UID: \"ca0d2f41-3394-4725-a47b-b4095950996d\") " pod="openshift-marketplace/redhat-operators-hdvh7" Jan 21 10:16:25 crc kubenswrapper[5113]: I0121 10:16:25.093685 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca0d2f41-3394-4725-a47b-b4095950996d-catalog-content\") pod \"redhat-operators-hdvh7\" (UID: \"ca0d2f41-3394-4725-a47b-b4095950996d\") " pod="openshift-marketplace/redhat-operators-hdvh7" Jan 21 10:16:25 crc kubenswrapper[5113]: I0121 10:16:25.093755 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca0d2f41-3394-4725-a47b-b4095950996d-utilities\") pod \"redhat-operators-hdvh7\" (UID: \"ca0d2f41-3394-4725-a47b-b4095950996d\") " pod="openshift-marketplace/redhat-operators-hdvh7" Jan 21 10:16:25 crc kubenswrapper[5113]: I0121 10:16:25.195445 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rt87d\" (UniqueName: \"kubernetes.io/projected/ca0d2f41-3394-4725-a47b-b4095950996d-kube-api-access-rt87d\") pod \"redhat-operators-hdvh7\" (UID: \"ca0d2f41-3394-4725-a47b-b4095950996d\") " pod="openshift-marketplace/redhat-operators-hdvh7" Jan 21 10:16:25 crc 
kubenswrapper[5113]: I0121 10:16:25.195498 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca0d2f41-3394-4725-a47b-b4095950996d-catalog-content\") pod \"redhat-operators-hdvh7\" (UID: \"ca0d2f41-3394-4725-a47b-b4095950996d\") " pod="openshift-marketplace/redhat-operators-hdvh7" Jan 21 10:16:25 crc kubenswrapper[5113]: I0121 10:16:25.195570 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca0d2f41-3394-4725-a47b-b4095950996d-utilities\") pod \"redhat-operators-hdvh7\" (UID: \"ca0d2f41-3394-4725-a47b-b4095950996d\") " pod="openshift-marketplace/redhat-operators-hdvh7" Jan 21 10:16:25 crc kubenswrapper[5113]: I0121 10:16:25.196277 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca0d2f41-3394-4725-a47b-b4095950996d-utilities\") pod \"redhat-operators-hdvh7\" (UID: \"ca0d2f41-3394-4725-a47b-b4095950996d\") " pod="openshift-marketplace/redhat-operators-hdvh7" Jan 21 10:16:25 crc kubenswrapper[5113]: I0121 10:16:25.196276 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca0d2f41-3394-4725-a47b-b4095950996d-catalog-content\") pod \"redhat-operators-hdvh7\" (UID: \"ca0d2f41-3394-4725-a47b-b4095950996d\") " pod="openshift-marketplace/redhat-operators-hdvh7" Jan 21 10:16:25 crc kubenswrapper[5113]: I0121 10:16:25.219253 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt87d\" (UniqueName: \"kubernetes.io/projected/ca0d2f41-3394-4725-a47b-b4095950996d-kube-api-access-rt87d\") pod \"redhat-operators-hdvh7\" (UID: \"ca0d2f41-3394-4725-a47b-b4095950996d\") " pod="openshift-marketplace/redhat-operators-hdvh7" Jan 21 10:16:25 crc kubenswrapper[5113]: I0121 10:16:25.382030 5113 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hdvh7" Jan 21 10:16:25 crc kubenswrapper[5113]: I0121 10:16:25.610402 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hdvh7"] Jan 21 10:16:25 crc kubenswrapper[5113]: I0121 10:16:25.834469 5113 generic.go:358] "Generic (PLEG): container finished" podID="ca0d2f41-3394-4725-a47b-b4095950996d" containerID="d969100a4e8b12baed6eff568b7a963cafb3c7779e7b07552e7049d69538f14c" exitCode=0 Jan 21 10:16:25 crc kubenswrapper[5113]: I0121 10:16:25.834565 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdvh7" event={"ID":"ca0d2f41-3394-4725-a47b-b4095950996d","Type":"ContainerDied","Data":"d969100a4e8b12baed6eff568b7a963cafb3c7779e7b07552e7049d69538f14c"} Jan 21 10:16:25 crc kubenswrapper[5113]: I0121 10:16:25.834874 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdvh7" event={"ID":"ca0d2f41-3394-4725-a47b-b4095950996d","Type":"ContainerStarted","Data":"e703693cb0ccc3851f29b20476601a33ef6f6173a5b6f80b0d03fa4e39dc940b"} Jan 21 10:16:27 crc kubenswrapper[5113]: I0121 10:16:27.880161 5113 generic.go:358] "Generic (PLEG): container finished" podID="ca0d2f41-3394-4725-a47b-b4095950996d" containerID="a54912dbb5d86f7efd1df2b73d3e26486bc248b13fe6e5b6ec723eedb828b2f3" exitCode=0 Jan 21 10:16:27 crc kubenswrapper[5113]: I0121 10:16:27.880804 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdvh7" event={"ID":"ca0d2f41-3394-4725-a47b-b4095950996d","Type":"ContainerDied","Data":"a54912dbb5d86f7efd1df2b73d3e26486bc248b13fe6e5b6ec723eedb828b2f3"} Jan 21 10:16:28 crc kubenswrapper[5113]: I0121 10:16:28.891389 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdvh7" 
event={"ID":"ca0d2f41-3394-4725-a47b-b4095950996d","Type":"ContainerStarted","Data":"099c63621b2fb0ffca3f9d16e4f0370372a9b8e5f306ff029c4544f7c02b1498"} Jan 21 10:16:28 crc kubenswrapper[5113]: I0121 10:16:28.914945 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hdvh7" podStartSLOduration=3.878593094 podStartE2EDuration="4.91492474s" podCreationTimestamp="2026-01-21 10:16:24 +0000 UTC" firstStartedPulling="2026-01-21 10:16:25.83553077 +0000 UTC m=+3515.336357819" lastFinishedPulling="2026-01-21 10:16:26.871862386 +0000 UTC m=+3516.372689465" observedRunningTime="2026-01-21 10:16:28.909650153 +0000 UTC m=+3518.410477202" watchObservedRunningTime="2026-01-21 10:16:28.91492474 +0000 UTC m=+3518.415751789" Jan 21 10:16:29 crc kubenswrapper[5113]: I0121 10:16:29.076794 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z6zvl"] Jan 21 10:16:29 crc kubenswrapper[5113]: I0121 10:16:29.107482 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z6zvl"] Jan 21 10:16:29 crc kubenswrapper[5113]: I0121 10:16:29.107624 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z6zvl" Jan 21 10:16:29 crc kubenswrapper[5113]: I0121 10:16:29.174441 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4wqp\" (UniqueName: \"kubernetes.io/projected/659c70d6-7c33-4b73-8e2a-0919c73302e0-kube-api-access-v4wqp\") pod \"community-operators-z6zvl\" (UID: \"659c70d6-7c33-4b73-8e2a-0919c73302e0\") " pod="openshift-marketplace/community-operators-z6zvl" Jan 21 10:16:29 crc kubenswrapper[5113]: I0121 10:16:29.174495 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/659c70d6-7c33-4b73-8e2a-0919c73302e0-utilities\") pod \"community-operators-z6zvl\" (UID: \"659c70d6-7c33-4b73-8e2a-0919c73302e0\") " pod="openshift-marketplace/community-operators-z6zvl" Jan 21 10:16:29 crc kubenswrapper[5113]: I0121 10:16:29.174538 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/659c70d6-7c33-4b73-8e2a-0919c73302e0-catalog-content\") pod \"community-operators-z6zvl\" (UID: \"659c70d6-7c33-4b73-8e2a-0919c73302e0\") " pod="openshift-marketplace/community-operators-z6zvl" Jan 21 10:16:29 crc kubenswrapper[5113]: I0121 10:16:29.276240 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v4wqp\" (UniqueName: \"kubernetes.io/projected/659c70d6-7c33-4b73-8e2a-0919c73302e0-kube-api-access-v4wqp\") pod \"community-operators-z6zvl\" (UID: \"659c70d6-7c33-4b73-8e2a-0919c73302e0\") " pod="openshift-marketplace/community-operators-z6zvl" Jan 21 10:16:29 crc kubenswrapper[5113]: I0121 10:16:29.276300 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/659c70d6-7c33-4b73-8e2a-0919c73302e0-utilities\") pod 
\"community-operators-z6zvl\" (UID: \"659c70d6-7c33-4b73-8e2a-0919c73302e0\") " pod="openshift-marketplace/community-operators-z6zvl" Jan 21 10:16:29 crc kubenswrapper[5113]: I0121 10:16:29.276335 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/659c70d6-7c33-4b73-8e2a-0919c73302e0-catalog-content\") pod \"community-operators-z6zvl\" (UID: \"659c70d6-7c33-4b73-8e2a-0919c73302e0\") " pod="openshift-marketplace/community-operators-z6zvl" Jan 21 10:16:29 crc kubenswrapper[5113]: I0121 10:16:29.276826 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/659c70d6-7c33-4b73-8e2a-0919c73302e0-catalog-content\") pod \"community-operators-z6zvl\" (UID: \"659c70d6-7c33-4b73-8e2a-0919c73302e0\") " pod="openshift-marketplace/community-operators-z6zvl" Jan 21 10:16:29 crc kubenswrapper[5113]: I0121 10:16:29.276907 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/659c70d6-7c33-4b73-8e2a-0919c73302e0-utilities\") pod \"community-operators-z6zvl\" (UID: \"659c70d6-7c33-4b73-8e2a-0919c73302e0\") " pod="openshift-marketplace/community-operators-z6zvl" Jan 21 10:16:29 crc kubenswrapper[5113]: I0121 10:16:29.314783 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4wqp\" (UniqueName: \"kubernetes.io/projected/659c70d6-7c33-4b73-8e2a-0919c73302e0-kube-api-access-v4wqp\") pod \"community-operators-z6zvl\" (UID: \"659c70d6-7c33-4b73-8e2a-0919c73302e0\") " pod="openshift-marketplace/community-operators-z6zvl" Jan 21 10:16:29 crc kubenswrapper[5113]: I0121 10:16:29.433218 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z6zvl" Jan 21 10:16:29 crc kubenswrapper[5113]: I0121 10:16:29.896553 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z6zvl"] Jan 21 10:16:29 crc kubenswrapper[5113]: W0121 10:16:29.906213 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod659c70d6_7c33_4b73_8e2a_0919c73302e0.slice/crio-978eeffc185c39d2f25985469e0ede36b8343921a2be9a8e92fe40e0afb8c1eb WatchSource:0}: Error finding container 978eeffc185c39d2f25985469e0ede36b8343921a2be9a8e92fe40e0afb8c1eb: Status 404 returned error can't find the container with id 978eeffc185c39d2f25985469e0ede36b8343921a2be9a8e92fe40e0afb8c1eb Jan 21 10:16:30 crc kubenswrapper[5113]: I0121 10:16:30.916540 5113 generic.go:358] "Generic (PLEG): container finished" podID="659c70d6-7c33-4b73-8e2a-0919c73302e0" containerID="10eeb3739d7d890335269999c3e2f37fbdda18894f115d962b601510b52b0ffc" exitCode=0 Jan 21 10:16:30 crc kubenswrapper[5113]: I0121 10:16:30.916615 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z6zvl" event={"ID":"659c70d6-7c33-4b73-8e2a-0919c73302e0","Type":"ContainerDied","Data":"10eeb3739d7d890335269999c3e2f37fbdda18894f115d962b601510b52b0ffc"} Jan 21 10:16:30 crc kubenswrapper[5113]: I0121 10:16:30.917121 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z6zvl" event={"ID":"659c70d6-7c33-4b73-8e2a-0919c73302e0","Type":"ContainerStarted","Data":"978eeffc185c39d2f25985469e0ede36b8343921a2be9a8e92fe40e0afb8c1eb"} Jan 21 10:16:32 crc kubenswrapper[5113]: I0121 10:16:32.949169 5113 generic.go:358] "Generic (PLEG): container finished" podID="659c70d6-7c33-4b73-8e2a-0919c73302e0" containerID="e3922b11d6148205eb28ecb6f92c3f36e914d1d5c293fe634b186f07d6eb7591" exitCode=0 Jan 21 10:16:32 crc kubenswrapper[5113]: I0121 
10:16:32.950018 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z6zvl" event={"ID":"659c70d6-7c33-4b73-8e2a-0919c73302e0","Type":"ContainerDied","Data":"e3922b11d6148205eb28ecb6f92c3f36e914d1d5c293fe634b186f07d6eb7591"} Jan 21 10:16:34 crc kubenswrapper[5113]: I0121 10:16:34.977498 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z6zvl" event={"ID":"659c70d6-7c33-4b73-8e2a-0919c73302e0","Type":"ContainerStarted","Data":"7d4e53744d46814cf62824f15678b9e6c1fd2830bf2ba88183943a3f25decbad"} Jan 21 10:16:35 crc kubenswrapper[5113]: I0121 10:16:35.382396 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hdvh7" Jan 21 10:16:35 crc kubenswrapper[5113]: I0121 10:16:35.382839 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-hdvh7" Jan 21 10:16:35 crc kubenswrapper[5113]: I0121 10:16:35.447133 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hdvh7" Jan 21 10:16:35 crc kubenswrapper[5113]: I0121 10:16:35.473429 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z6zvl" podStartSLOduration=5.590470804 podStartE2EDuration="6.473401364s" podCreationTimestamp="2026-01-21 10:16:29 +0000 UTC" firstStartedPulling="2026-01-21 10:16:30.91841966 +0000 UTC m=+3520.419246749" lastFinishedPulling="2026-01-21 10:16:31.80135026 +0000 UTC m=+3521.302177309" observedRunningTime="2026-01-21 10:16:35.002327037 +0000 UTC m=+3524.503154096" watchObservedRunningTime="2026-01-21 10:16:35.473401364 +0000 UTC m=+3524.974228453" Jan 21 10:16:36 crc kubenswrapper[5113]: I0121 10:16:36.060721 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hdvh7" Jan 21 
10:16:37 crc kubenswrapper[5113]: I0121 10:16:37.394945 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hdvh7"] Jan 21 10:16:38 crc kubenswrapper[5113]: I0121 10:16:38.010954 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hdvh7" podUID="ca0d2f41-3394-4725-a47b-b4095950996d" containerName="registry-server" containerID="cri-o://099c63621b2fb0ffca3f9d16e4f0370372a9b8e5f306ff029c4544f7c02b1498" gracePeriod=2 Jan 21 10:16:39 crc kubenswrapper[5113]: I0121 10:16:39.433891 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z6zvl" Jan 21 10:16:39 crc kubenswrapper[5113]: I0121 10:16:39.434300 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-z6zvl" Jan 21 10:16:39 crc kubenswrapper[5113]: I0121 10:16:39.496275 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z6zvl" Jan 21 10:16:40 crc kubenswrapper[5113]: I0121 10:16:40.105584 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z6zvl" Jan 21 10:16:40 crc kubenswrapper[5113]: I0121 10:16:40.997983 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z6zvl"] Jan 21 10:16:42 crc kubenswrapper[5113]: I0121 10:16:42.051268 5113 generic.go:358] "Generic (PLEG): container finished" podID="ca0d2f41-3394-4725-a47b-b4095950996d" containerID="099c63621b2fb0ffca3f9d16e4f0370372a9b8e5f306ff029c4544f7c02b1498" exitCode=0 Jan 21 10:16:42 crc kubenswrapper[5113]: I0121 10:16:42.051358 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdvh7" 
event={"ID":"ca0d2f41-3394-4725-a47b-b4095950996d","Type":"ContainerDied","Data":"099c63621b2fb0ffca3f9d16e4f0370372a9b8e5f306ff029c4544f7c02b1498"} Jan 21 10:16:42 crc kubenswrapper[5113]: I0121 10:16:42.052015 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z6zvl" podUID="659c70d6-7c33-4b73-8e2a-0919c73302e0" containerName="registry-server" containerID="cri-o://7d4e53744d46814cf62824f15678b9e6c1fd2830bf2ba88183943a3f25decbad" gracePeriod=2 Jan 21 10:16:42 crc kubenswrapper[5113]: I0121 10:16:42.216487 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hdvh7" Jan 21 10:16:42 crc kubenswrapper[5113]: I0121 10:16:42.300568 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca0d2f41-3394-4725-a47b-b4095950996d-catalog-content\") pod \"ca0d2f41-3394-4725-a47b-b4095950996d\" (UID: \"ca0d2f41-3394-4725-a47b-b4095950996d\") " Jan 21 10:16:42 crc kubenswrapper[5113]: I0121 10:16:42.300668 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rt87d\" (UniqueName: \"kubernetes.io/projected/ca0d2f41-3394-4725-a47b-b4095950996d-kube-api-access-rt87d\") pod \"ca0d2f41-3394-4725-a47b-b4095950996d\" (UID: \"ca0d2f41-3394-4725-a47b-b4095950996d\") " Jan 21 10:16:42 crc kubenswrapper[5113]: I0121 10:16:42.300801 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca0d2f41-3394-4725-a47b-b4095950996d-utilities\") pod \"ca0d2f41-3394-4725-a47b-b4095950996d\" (UID: \"ca0d2f41-3394-4725-a47b-b4095950996d\") " Jan 21 10:16:42 crc kubenswrapper[5113]: I0121 10:16:42.302158 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca0d2f41-3394-4725-a47b-b4095950996d-utilities" 
(OuterVolumeSpecName: "utilities") pod "ca0d2f41-3394-4725-a47b-b4095950996d" (UID: "ca0d2f41-3394-4725-a47b-b4095950996d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:16:42 crc kubenswrapper[5113]: I0121 10:16:42.319578 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca0d2f41-3394-4725-a47b-b4095950996d-kube-api-access-rt87d" (OuterVolumeSpecName: "kube-api-access-rt87d") pod "ca0d2f41-3394-4725-a47b-b4095950996d" (UID: "ca0d2f41-3394-4725-a47b-b4095950996d"). InnerVolumeSpecName "kube-api-access-rt87d". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:16:42 crc kubenswrapper[5113]: I0121 10:16:42.403037 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca0d2f41-3394-4725-a47b-b4095950996d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:16:42 crc kubenswrapper[5113]: I0121 10:16:42.403078 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rt87d\" (UniqueName: \"kubernetes.io/projected/ca0d2f41-3394-4725-a47b-b4095950996d-kube-api-access-rt87d\") on node \"crc\" DevicePath \"\"" Jan 21 10:16:42 crc kubenswrapper[5113]: I0121 10:16:42.438798 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca0d2f41-3394-4725-a47b-b4095950996d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca0d2f41-3394-4725-a47b-b4095950996d" (UID: "ca0d2f41-3394-4725-a47b-b4095950996d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:16:42 crc kubenswrapper[5113]: I0121 10:16:42.474010 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z6zvl" Jan 21 10:16:42 crc kubenswrapper[5113]: I0121 10:16:42.504200 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca0d2f41-3394-4725-a47b-b4095950996d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:16:42 crc kubenswrapper[5113]: I0121 10:16:42.605255 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/659c70d6-7c33-4b73-8e2a-0919c73302e0-utilities\") pod \"659c70d6-7c33-4b73-8e2a-0919c73302e0\" (UID: \"659c70d6-7c33-4b73-8e2a-0919c73302e0\") " Jan 21 10:16:42 crc kubenswrapper[5113]: I0121 10:16:42.605317 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4wqp\" (UniqueName: \"kubernetes.io/projected/659c70d6-7c33-4b73-8e2a-0919c73302e0-kube-api-access-v4wqp\") pod \"659c70d6-7c33-4b73-8e2a-0919c73302e0\" (UID: \"659c70d6-7c33-4b73-8e2a-0919c73302e0\") " Jan 21 10:16:42 crc kubenswrapper[5113]: I0121 10:16:42.605467 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/659c70d6-7c33-4b73-8e2a-0919c73302e0-catalog-content\") pod \"659c70d6-7c33-4b73-8e2a-0919c73302e0\" (UID: \"659c70d6-7c33-4b73-8e2a-0919c73302e0\") " Jan 21 10:16:42 crc kubenswrapper[5113]: I0121 10:16:42.607973 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/659c70d6-7c33-4b73-8e2a-0919c73302e0-utilities" (OuterVolumeSpecName: "utilities") pod "659c70d6-7c33-4b73-8e2a-0919c73302e0" (UID: "659c70d6-7c33-4b73-8e2a-0919c73302e0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:16:42 crc kubenswrapper[5113]: I0121 10:16:42.615506 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/659c70d6-7c33-4b73-8e2a-0919c73302e0-kube-api-access-v4wqp" (OuterVolumeSpecName: "kube-api-access-v4wqp") pod "659c70d6-7c33-4b73-8e2a-0919c73302e0" (UID: "659c70d6-7c33-4b73-8e2a-0919c73302e0"). InnerVolumeSpecName "kube-api-access-v4wqp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:16:42 crc kubenswrapper[5113]: I0121 10:16:42.662104 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/659c70d6-7c33-4b73-8e2a-0919c73302e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "659c70d6-7c33-4b73-8e2a-0919c73302e0" (UID: "659c70d6-7c33-4b73-8e2a-0919c73302e0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:16:42 crc kubenswrapper[5113]: I0121 10:16:42.709625 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/659c70d6-7c33-4b73-8e2a-0919c73302e0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:16:42 crc kubenswrapper[5113]: I0121 10:16:42.709680 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/659c70d6-7c33-4b73-8e2a-0919c73302e0-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:16:42 crc kubenswrapper[5113]: I0121 10:16:42.709697 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v4wqp\" (UniqueName: \"kubernetes.io/projected/659c70d6-7c33-4b73-8e2a-0919c73302e0-kube-api-access-v4wqp\") on node \"crc\" DevicePath \"\"" Jan 21 10:16:43 crc kubenswrapper[5113]: I0121 10:16:43.061769 5113 generic.go:358] "Generic (PLEG): container finished" podID="659c70d6-7c33-4b73-8e2a-0919c73302e0" 
containerID="7d4e53744d46814cf62824f15678b9e6c1fd2830bf2ba88183943a3f25decbad" exitCode=0 Jan 21 10:16:43 crc kubenswrapper[5113]: I0121 10:16:43.061895 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z6zvl" Jan 21 10:16:43 crc kubenswrapper[5113]: I0121 10:16:43.061894 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z6zvl" event={"ID":"659c70d6-7c33-4b73-8e2a-0919c73302e0","Type":"ContainerDied","Data":"7d4e53744d46814cf62824f15678b9e6c1fd2830bf2ba88183943a3f25decbad"} Jan 21 10:16:43 crc kubenswrapper[5113]: I0121 10:16:43.062037 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z6zvl" event={"ID":"659c70d6-7c33-4b73-8e2a-0919c73302e0","Type":"ContainerDied","Data":"978eeffc185c39d2f25985469e0ede36b8343921a2be9a8e92fe40e0afb8c1eb"} Jan 21 10:16:43 crc kubenswrapper[5113]: I0121 10:16:43.062059 5113 scope.go:117] "RemoveContainer" containerID="7d4e53744d46814cf62824f15678b9e6c1fd2830bf2ba88183943a3f25decbad" Jan 21 10:16:43 crc kubenswrapper[5113]: I0121 10:16:43.064015 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdvh7" event={"ID":"ca0d2f41-3394-4725-a47b-b4095950996d","Type":"ContainerDied","Data":"e703693cb0ccc3851f29b20476601a33ef6f6173a5b6f80b0d03fa4e39dc940b"} Jan 21 10:16:43 crc kubenswrapper[5113]: I0121 10:16:43.064112 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hdvh7" Jan 21 10:16:43 crc kubenswrapper[5113]: I0121 10:16:43.082630 5113 scope.go:117] "RemoveContainer" containerID="e3922b11d6148205eb28ecb6f92c3f36e914d1d5c293fe634b186f07d6eb7591" Jan 21 10:16:43 crc kubenswrapper[5113]: I0121 10:16:43.094080 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hdvh7"] Jan 21 10:16:43 crc kubenswrapper[5113]: I0121 10:16:43.102982 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hdvh7"] Jan 21 10:16:43 crc kubenswrapper[5113]: I0121 10:16:43.106928 5113 scope.go:117] "RemoveContainer" containerID="10eeb3739d7d890335269999c3e2f37fbdda18894f115d962b601510b52b0ffc" Jan 21 10:16:43 crc kubenswrapper[5113]: I0121 10:16:43.108154 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z6zvl"] Jan 21 10:16:43 crc kubenswrapper[5113]: I0121 10:16:43.113114 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-z6zvl"] Jan 21 10:16:43 crc kubenswrapper[5113]: I0121 10:16:43.131883 5113 scope.go:117] "RemoveContainer" containerID="7d4e53744d46814cf62824f15678b9e6c1fd2830bf2ba88183943a3f25decbad" Jan 21 10:16:43 crc kubenswrapper[5113]: E0121 10:16:43.132191 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d4e53744d46814cf62824f15678b9e6c1fd2830bf2ba88183943a3f25decbad\": container with ID starting with 7d4e53744d46814cf62824f15678b9e6c1fd2830bf2ba88183943a3f25decbad not found: ID does not exist" containerID="7d4e53744d46814cf62824f15678b9e6c1fd2830bf2ba88183943a3f25decbad" Jan 21 10:16:43 crc kubenswrapper[5113]: I0121 10:16:43.132218 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d4e53744d46814cf62824f15678b9e6c1fd2830bf2ba88183943a3f25decbad"} 
err="failed to get container status \"7d4e53744d46814cf62824f15678b9e6c1fd2830bf2ba88183943a3f25decbad\": rpc error: code = NotFound desc = could not find container \"7d4e53744d46814cf62824f15678b9e6c1fd2830bf2ba88183943a3f25decbad\": container with ID starting with 7d4e53744d46814cf62824f15678b9e6c1fd2830bf2ba88183943a3f25decbad not found: ID does not exist" Jan 21 10:16:43 crc kubenswrapper[5113]: I0121 10:16:43.132235 5113 scope.go:117] "RemoveContainer" containerID="e3922b11d6148205eb28ecb6f92c3f36e914d1d5c293fe634b186f07d6eb7591" Jan 21 10:16:43 crc kubenswrapper[5113]: E0121 10:16:43.132439 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3922b11d6148205eb28ecb6f92c3f36e914d1d5c293fe634b186f07d6eb7591\": container with ID starting with e3922b11d6148205eb28ecb6f92c3f36e914d1d5c293fe634b186f07d6eb7591 not found: ID does not exist" containerID="e3922b11d6148205eb28ecb6f92c3f36e914d1d5c293fe634b186f07d6eb7591" Jan 21 10:16:43 crc kubenswrapper[5113]: I0121 10:16:43.132454 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3922b11d6148205eb28ecb6f92c3f36e914d1d5c293fe634b186f07d6eb7591"} err="failed to get container status \"e3922b11d6148205eb28ecb6f92c3f36e914d1d5c293fe634b186f07d6eb7591\": rpc error: code = NotFound desc = could not find container \"e3922b11d6148205eb28ecb6f92c3f36e914d1d5c293fe634b186f07d6eb7591\": container with ID starting with e3922b11d6148205eb28ecb6f92c3f36e914d1d5c293fe634b186f07d6eb7591 not found: ID does not exist" Jan 21 10:16:43 crc kubenswrapper[5113]: I0121 10:16:43.132466 5113 scope.go:117] "RemoveContainer" containerID="10eeb3739d7d890335269999c3e2f37fbdda18894f115d962b601510b52b0ffc" Jan 21 10:16:43 crc kubenswrapper[5113]: E0121 10:16:43.132943 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"10eeb3739d7d890335269999c3e2f37fbdda18894f115d962b601510b52b0ffc\": container with ID starting with 10eeb3739d7d890335269999c3e2f37fbdda18894f115d962b601510b52b0ffc not found: ID does not exist" containerID="10eeb3739d7d890335269999c3e2f37fbdda18894f115d962b601510b52b0ffc" Jan 21 10:16:43 crc kubenswrapper[5113]: I0121 10:16:43.132964 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10eeb3739d7d890335269999c3e2f37fbdda18894f115d962b601510b52b0ffc"} err="failed to get container status \"10eeb3739d7d890335269999c3e2f37fbdda18894f115d962b601510b52b0ffc\": rpc error: code = NotFound desc = could not find container \"10eeb3739d7d890335269999c3e2f37fbdda18894f115d962b601510b52b0ffc\": container with ID starting with 10eeb3739d7d890335269999c3e2f37fbdda18894f115d962b601510b52b0ffc not found: ID does not exist" Jan 21 10:16:43 crc kubenswrapper[5113]: I0121 10:16:43.132975 5113 scope.go:117] "RemoveContainer" containerID="099c63621b2fb0ffca3f9d16e4f0370372a9b8e5f306ff029c4544f7c02b1498" Jan 21 10:16:43 crc kubenswrapper[5113]: I0121 10:16:43.148679 5113 scope.go:117] "RemoveContainer" containerID="a54912dbb5d86f7efd1df2b73d3e26486bc248b13fe6e5b6ec723eedb828b2f3" Jan 21 10:16:43 crc kubenswrapper[5113]: I0121 10:16:43.167300 5113 scope.go:117] "RemoveContainer" containerID="d969100a4e8b12baed6eff568b7a963cafb3c7779e7b07552e7049d69538f14c" Jan 21 10:16:44 crc kubenswrapper[5113]: I0121 10:16:44.856941 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="659c70d6-7c33-4b73-8e2a-0919c73302e0" path="/var/lib/kubelet/pods/659c70d6-7c33-4b73-8e2a-0919c73302e0/volumes" Jan 21 10:16:44 crc kubenswrapper[5113]: I0121 10:16:44.858499 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca0d2f41-3394-4725-a47b-b4095950996d" path="/var/lib/kubelet/pods/ca0d2f41-3394-4725-a47b-b4095950996d/volumes" Jan 21 10:16:58 crc kubenswrapper[5113]: I0121 10:16:58.339958 5113 patch_prober.go:28] 
interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:16:58 crc kubenswrapper[5113]: I0121 10:16:58.340576 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:17:03 crc kubenswrapper[5113]: I0121 10:17:03.778528 5113 scope.go:117] "RemoveContainer" containerID="cb6fe12127443d0d026e26fad8f10470181573c8c25e3b7bac9da21b7069099e" Jan 21 10:17:28 crc kubenswrapper[5113]: I0121 10:17:28.340162 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:17:28 crc kubenswrapper[5113]: I0121 10:17:28.340891 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:17:52 crc kubenswrapper[5113]: I0121 10:17:52.575237 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vcw7s_11da35cd-b282-4537-ac8f-b6c86b18c21f/kube-multus/0.log" Jan 21 10:17:52 crc kubenswrapper[5113]: I0121 10:17:52.588115 5113 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-vcw7s_11da35cd-b282-4537-ac8f-b6c86b18c21f/kube-multus/0.log" Jan 21 10:17:52 crc kubenswrapper[5113]: I0121 10:17:52.589336 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 10:17:52 crc kubenswrapper[5113]: I0121 10:17:52.600268 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 10:17:58 crc kubenswrapper[5113]: I0121 10:17:58.339711 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:17:58 crc kubenswrapper[5113]: I0121 10:17:58.340283 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:17:58 crc kubenswrapper[5113]: I0121 10:17:58.340330 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" Jan 21 10:17:58 crc kubenswrapper[5113]: I0121 10:17:58.340906 5113 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7a222ced9e6ada1383ad249969c0fe50f3b46dad64f151d2be479e40f9e59f23"} pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" 
Jan 21 10:17:58 crc kubenswrapper[5113]: I0121 10:17:58.340966 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" containerID="cri-o://7a222ced9e6ada1383ad249969c0fe50f3b46dad64f151d2be479e40f9e59f23" gracePeriod=600 Jan 21 10:17:58 crc kubenswrapper[5113]: E0121 10:17:58.476641 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:17:58 crc kubenswrapper[5113]: I0121 10:17:58.788493 5113 generic.go:358] "Generic (PLEG): container finished" podID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerID="7a222ced9e6ada1383ad249969c0fe50f3b46dad64f151d2be479e40f9e59f23" exitCode=0 Jan 21 10:17:58 crc kubenswrapper[5113]: I0121 10:17:58.788576 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" event={"ID":"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd","Type":"ContainerDied","Data":"7a222ced9e6ada1383ad249969c0fe50f3b46dad64f151d2be479e40f9e59f23"} Jan 21 10:17:58 crc kubenswrapper[5113]: I0121 10:17:58.788610 5113 scope.go:117] "RemoveContainer" containerID="54f41eed3d358a3564691b0504015bd4ae43e2f72e6737cc1ce3b758639d857d" Jan 21 10:17:58 crc kubenswrapper[5113]: I0121 10:17:58.789244 5113 scope.go:117] "RemoveContainer" containerID="7a222ced9e6ada1383ad249969c0fe50f3b46dad64f151d2be479e40f9e59f23" Jan 21 10:17:58 crc kubenswrapper[5113]: E0121 10:17:58.789505 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:18:00 crc kubenswrapper[5113]: I0121 10:18:00.162586 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483178-9ftsj"] Jan 21 10:18:00 crc kubenswrapper[5113]: I0121 10:18:00.163962 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ca0d2f41-3394-4725-a47b-b4095950996d" containerName="extract-utilities" Jan 21 10:18:00 crc kubenswrapper[5113]: I0121 10:18:00.163983 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca0d2f41-3394-4725-a47b-b4095950996d" containerName="extract-utilities" Jan 21 10:18:00 crc kubenswrapper[5113]: I0121 10:18:00.164001 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ca0d2f41-3394-4725-a47b-b4095950996d" containerName="registry-server" Jan 21 10:18:00 crc kubenswrapper[5113]: I0121 10:18:00.164010 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca0d2f41-3394-4725-a47b-b4095950996d" containerName="registry-server" Jan 21 10:18:00 crc kubenswrapper[5113]: I0121 10:18:00.164021 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="659c70d6-7c33-4b73-8e2a-0919c73302e0" containerName="extract-content" Jan 21 10:18:00 crc kubenswrapper[5113]: I0121 10:18:00.164032 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="659c70d6-7c33-4b73-8e2a-0919c73302e0" containerName="extract-content" Jan 21 10:18:00 crc kubenswrapper[5113]: I0121 10:18:00.164088 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ca0d2f41-3394-4725-a47b-b4095950996d" containerName="extract-content" Jan 21 10:18:00 crc 
kubenswrapper[5113]: I0121 10:18:00.164100 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca0d2f41-3394-4725-a47b-b4095950996d" containerName="extract-content" Jan 21 10:18:00 crc kubenswrapper[5113]: I0121 10:18:00.164114 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="659c70d6-7c33-4b73-8e2a-0919c73302e0" containerName="registry-server" Jan 21 10:18:00 crc kubenswrapper[5113]: I0121 10:18:00.164124 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="659c70d6-7c33-4b73-8e2a-0919c73302e0" containerName="registry-server" Jan 21 10:18:00 crc kubenswrapper[5113]: I0121 10:18:00.164148 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="659c70d6-7c33-4b73-8e2a-0919c73302e0" containerName="extract-utilities" Jan 21 10:18:00 crc kubenswrapper[5113]: I0121 10:18:00.164158 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="659c70d6-7c33-4b73-8e2a-0919c73302e0" containerName="extract-utilities" Jan 21 10:18:00 crc kubenswrapper[5113]: I0121 10:18:00.164341 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="ca0d2f41-3394-4725-a47b-b4095950996d" containerName="registry-server" Jan 21 10:18:00 crc kubenswrapper[5113]: I0121 10:18:00.164360 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="659c70d6-7c33-4b73-8e2a-0919c73302e0" containerName="registry-server" Jan 21 10:18:00 crc kubenswrapper[5113]: I0121 10:18:00.171472 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483178-9ftsj" Jan 21 10:18:00 crc kubenswrapper[5113]: I0121 10:18:00.174764 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 10:18:00 crc kubenswrapper[5113]: I0121 10:18:00.175169 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-mmrr5\"" Jan 21 10:18:00 crc kubenswrapper[5113]: I0121 10:18:00.177772 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483178-9ftsj"] Jan 21 10:18:00 crc kubenswrapper[5113]: I0121 10:18:00.178053 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 10:18:00 crc kubenswrapper[5113]: I0121 10:18:00.301241 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbp4j\" (UniqueName: \"kubernetes.io/projected/91a2da3f-e2b3-4a48-a8b6-47de1db2c643-kube-api-access-mbp4j\") pod \"auto-csr-approver-29483178-9ftsj\" (UID: \"91a2da3f-e2b3-4a48-a8b6-47de1db2c643\") " pod="openshift-infra/auto-csr-approver-29483178-9ftsj" Jan 21 10:18:00 crc kubenswrapper[5113]: I0121 10:18:00.408919 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mbp4j\" (UniqueName: \"kubernetes.io/projected/91a2da3f-e2b3-4a48-a8b6-47de1db2c643-kube-api-access-mbp4j\") pod \"auto-csr-approver-29483178-9ftsj\" (UID: \"91a2da3f-e2b3-4a48-a8b6-47de1db2c643\") " pod="openshift-infra/auto-csr-approver-29483178-9ftsj" Jan 21 10:18:00 crc kubenswrapper[5113]: I0121 10:18:00.438835 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbp4j\" (UniqueName: \"kubernetes.io/projected/91a2da3f-e2b3-4a48-a8b6-47de1db2c643-kube-api-access-mbp4j\") pod \"auto-csr-approver-29483178-9ftsj\" (UID: 
\"91a2da3f-e2b3-4a48-a8b6-47de1db2c643\") " pod="openshift-infra/auto-csr-approver-29483178-9ftsj" Jan 21 10:18:00 crc kubenswrapper[5113]: I0121 10:18:00.501021 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483178-9ftsj" Jan 21 10:18:00 crc kubenswrapper[5113]: I0121 10:18:00.952605 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483178-9ftsj"] Jan 21 10:18:01 crc kubenswrapper[5113]: I0121 10:18:01.837798 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483178-9ftsj" event={"ID":"91a2da3f-e2b3-4a48-a8b6-47de1db2c643","Type":"ContainerStarted","Data":"e03074c6aebe394120cf19df9fe0718f2864b270481fd92b59b1f499c4dbbfa5"} Jan 21 10:18:02 crc kubenswrapper[5113]: I0121 10:18:02.848562 5113 generic.go:358] "Generic (PLEG): container finished" podID="91a2da3f-e2b3-4a48-a8b6-47de1db2c643" containerID="5ffd52d476c01095090520ff3da3fb49708bad1c1076cc6eca2e3f3832e6f1c3" exitCode=0 Jan 21 10:18:02 crc kubenswrapper[5113]: I0121 10:18:02.858228 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483178-9ftsj" event={"ID":"91a2da3f-e2b3-4a48-a8b6-47de1db2c643","Type":"ContainerDied","Data":"5ffd52d476c01095090520ff3da3fb49708bad1c1076cc6eca2e3f3832e6f1c3"} Jan 21 10:18:04 crc kubenswrapper[5113]: I0121 10:18:04.172814 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483178-9ftsj" Jan 21 10:18:04 crc kubenswrapper[5113]: I0121 10:18:04.277302 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbp4j\" (UniqueName: \"kubernetes.io/projected/91a2da3f-e2b3-4a48-a8b6-47de1db2c643-kube-api-access-mbp4j\") pod \"91a2da3f-e2b3-4a48-a8b6-47de1db2c643\" (UID: \"91a2da3f-e2b3-4a48-a8b6-47de1db2c643\") " Jan 21 10:18:04 crc kubenswrapper[5113]: I0121 10:18:04.284587 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91a2da3f-e2b3-4a48-a8b6-47de1db2c643-kube-api-access-mbp4j" (OuterVolumeSpecName: "kube-api-access-mbp4j") pod "91a2da3f-e2b3-4a48-a8b6-47de1db2c643" (UID: "91a2da3f-e2b3-4a48-a8b6-47de1db2c643"). InnerVolumeSpecName "kube-api-access-mbp4j". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:18:04 crc kubenswrapper[5113]: I0121 10:18:04.378967 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mbp4j\" (UniqueName: \"kubernetes.io/projected/91a2da3f-e2b3-4a48-a8b6-47de1db2c643-kube-api-access-mbp4j\") on node \"crc\" DevicePath \"\"" Jan 21 10:18:04 crc kubenswrapper[5113]: I0121 10:18:04.869726 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483178-9ftsj" Jan 21 10:18:04 crc kubenswrapper[5113]: I0121 10:18:04.869785 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483178-9ftsj" event={"ID":"91a2da3f-e2b3-4a48-a8b6-47de1db2c643","Type":"ContainerDied","Data":"e03074c6aebe394120cf19df9fe0718f2864b270481fd92b59b1f499c4dbbfa5"} Jan 21 10:18:04 crc kubenswrapper[5113]: I0121 10:18:04.870217 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e03074c6aebe394120cf19df9fe0718f2864b270481fd92b59b1f499c4dbbfa5" Jan 21 10:18:05 crc kubenswrapper[5113]: I0121 10:18:05.275475 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483172-rt5rk"] Jan 21 10:18:05 crc kubenswrapper[5113]: I0121 10:18:05.285785 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483172-rt5rk"] Jan 21 10:18:06 crc kubenswrapper[5113]: I0121 10:18:06.856190 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40989b1d-8b29-4203-8b56-2f9d9043e609" path="/var/lib/kubelet/pods/40989b1d-8b29-4203-8b56-2f9d9043e609/volumes" Jan 21 10:18:11 crc kubenswrapper[5113]: I0121 10:18:11.844913 5113 scope.go:117] "RemoveContainer" containerID="7a222ced9e6ada1383ad249969c0fe50f3b46dad64f151d2be479e40f9e59f23" Jan 21 10:18:11 crc kubenswrapper[5113]: E0121 10:18:11.845864 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:18:22 crc kubenswrapper[5113]: I0121 10:18:22.843835 5113 scope.go:117] "RemoveContainer" 
containerID="7a222ced9e6ada1383ad249969c0fe50f3b46dad64f151d2be479e40f9e59f23" Jan 21 10:18:22 crc kubenswrapper[5113]: E0121 10:18:22.845710 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:18:35 crc kubenswrapper[5113]: I0121 10:18:35.844222 5113 scope.go:117] "RemoveContainer" containerID="7a222ced9e6ada1383ad249969c0fe50f3b46dad64f151d2be479e40f9e59f23" Jan 21 10:18:35 crc kubenswrapper[5113]: E0121 10:18:35.845499 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:18:47 crc kubenswrapper[5113]: I0121 10:18:47.844287 5113 scope.go:117] "RemoveContainer" containerID="7a222ced9e6ada1383ad249969c0fe50f3b46dad64f151d2be479e40f9e59f23" Jan 21 10:18:47 crc kubenswrapper[5113]: E0121 10:18:47.845448 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:19:00 crc kubenswrapper[5113]: I0121 10:19:00.855317 5113 scope.go:117] 
"RemoveContainer" containerID="7a222ced9e6ada1383ad249969c0fe50f3b46dad64f151d2be479e40f9e59f23" Jan 21 10:19:00 crc kubenswrapper[5113]: E0121 10:19:00.856572 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:19:03 crc kubenswrapper[5113]: I0121 10:19:03.993044 5113 scope.go:117] "RemoveContainer" containerID="1c0328aa303f1c6405f40c0228e21970976ef4361d78b75e9f52634136321bde" Jan 21 10:19:15 crc kubenswrapper[5113]: I0121 10:19:15.843845 5113 scope.go:117] "RemoveContainer" containerID="7a222ced9e6ada1383ad249969c0fe50f3b46dad64f151d2be479e40f9e59f23" Jan 21 10:19:15 crc kubenswrapper[5113]: E0121 10:19:15.845847 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:19:30 crc kubenswrapper[5113]: I0121 10:19:30.850104 5113 scope.go:117] "RemoveContainer" containerID="7a222ced9e6ada1383ad249969c0fe50f3b46dad64f151d2be479e40f9e59f23" Jan 21 10:19:30 crc kubenswrapper[5113]: E0121 10:19:30.851237 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:19:41 crc kubenswrapper[5113]: I0121 10:19:41.844206 5113 scope.go:117] "RemoveContainer" containerID="7a222ced9e6ada1383ad249969c0fe50f3b46dad64f151d2be479e40f9e59f23" Jan 21 10:19:41 crc kubenswrapper[5113]: E0121 10:19:41.845297 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:19:56 crc kubenswrapper[5113]: I0121 10:19:56.843307 5113 scope.go:117] "RemoveContainer" containerID="7a222ced9e6ada1383ad249969c0fe50f3b46dad64f151d2be479e40f9e59f23" Jan 21 10:19:56 crc kubenswrapper[5113]: E0121 10:19:56.844569 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:20:00 crc kubenswrapper[5113]: I0121 10:20:00.163721 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483180-lq28x"] Jan 21 10:20:00 crc kubenswrapper[5113]: I0121 10:20:00.165505 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="91a2da3f-e2b3-4a48-a8b6-47de1db2c643" containerName="oc" Jan 21 10:20:00 crc kubenswrapper[5113]: I0121 10:20:00.165542 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="91a2da3f-e2b3-4a48-a8b6-47de1db2c643" 
containerName="oc" Jan 21 10:20:00 crc kubenswrapper[5113]: I0121 10:20:00.165994 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="91a2da3f-e2b3-4a48-a8b6-47de1db2c643" containerName="oc" Jan 21 10:20:00 crc kubenswrapper[5113]: I0121 10:20:00.217154 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483180-lq28x"] Jan 21 10:20:00 crc kubenswrapper[5113]: I0121 10:20:00.217307 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483180-lq28x" Jan 21 10:20:00 crc kubenswrapper[5113]: I0121 10:20:00.219507 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-mmrr5\"" Jan 21 10:20:00 crc kubenswrapper[5113]: I0121 10:20:00.219837 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 10:20:00 crc kubenswrapper[5113]: I0121 10:20:00.236592 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 10:20:00 crc kubenswrapper[5113]: I0121 10:20:00.295078 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jslrs\" (UniqueName: \"kubernetes.io/projected/c712538c-7579-496c-bf7f-026a992acf4f-kube-api-access-jslrs\") pod \"auto-csr-approver-29483180-lq28x\" (UID: \"c712538c-7579-496c-bf7f-026a992acf4f\") " pod="openshift-infra/auto-csr-approver-29483180-lq28x" Jan 21 10:20:00 crc kubenswrapper[5113]: I0121 10:20:00.396438 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jslrs\" (UniqueName: \"kubernetes.io/projected/c712538c-7579-496c-bf7f-026a992acf4f-kube-api-access-jslrs\") pod \"auto-csr-approver-29483180-lq28x\" (UID: \"c712538c-7579-496c-bf7f-026a992acf4f\") " 
pod="openshift-infra/auto-csr-approver-29483180-lq28x" Jan 21 10:20:00 crc kubenswrapper[5113]: I0121 10:20:00.444498 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jslrs\" (UniqueName: \"kubernetes.io/projected/c712538c-7579-496c-bf7f-026a992acf4f-kube-api-access-jslrs\") pod \"auto-csr-approver-29483180-lq28x\" (UID: \"c712538c-7579-496c-bf7f-026a992acf4f\") " pod="openshift-infra/auto-csr-approver-29483180-lq28x" Jan 21 10:20:00 crc kubenswrapper[5113]: I0121 10:20:00.543135 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483180-lq28x" Jan 21 10:20:00 crc kubenswrapper[5113]: I0121 10:20:00.809475 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483180-lq28x"] Jan 21 10:20:00 crc kubenswrapper[5113]: I0121 10:20:00.814762 5113 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 10:20:01 crc kubenswrapper[5113]: I0121 10:20:01.057891 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483180-lq28x" event={"ID":"c712538c-7579-496c-bf7f-026a992acf4f","Type":"ContainerStarted","Data":"d2243a2afbd3e0499272864b33e30d0702755c1e10c6da1e2de0b772f2594bc9"} Jan 21 10:20:03 crc kubenswrapper[5113]: I0121 10:20:03.083702 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483180-lq28x" event={"ID":"c712538c-7579-496c-bf7f-026a992acf4f","Type":"ContainerStarted","Data":"d8f669a33607b943c863e6cdce700a60168e56a8688c047a777f60a9a49cdeb4"} Jan 21 10:20:03 crc kubenswrapper[5113]: I0121 10:20:03.116004 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29483180-lq28x" podStartSLOduration=1.446923432 podStartE2EDuration="3.115978981s" podCreationTimestamp="2026-01-21 10:20:00 +0000 UTC" firstStartedPulling="2026-01-21 
10:20:00.814911131 +0000 UTC m=+3730.315738180" lastFinishedPulling="2026-01-21 10:20:02.48396665 +0000 UTC m=+3731.984793729" observedRunningTime="2026-01-21 10:20:03.110014525 +0000 UTC m=+3732.610841624" watchObservedRunningTime="2026-01-21 10:20:03.115978981 +0000 UTC m=+3732.616806060" Jan 21 10:20:04 crc kubenswrapper[5113]: I0121 10:20:04.096972 5113 generic.go:358] "Generic (PLEG): container finished" podID="c712538c-7579-496c-bf7f-026a992acf4f" containerID="d8f669a33607b943c863e6cdce700a60168e56a8688c047a777f60a9a49cdeb4" exitCode=0 Jan 21 10:20:04 crc kubenswrapper[5113]: I0121 10:20:04.097083 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483180-lq28x" event={"ID":"c712538c-7579-496c-bf7f-026a992acf4f","Type":"ContainerDied","Data":"d8f669a33607b943c863e6cdce700a60168e56a8688c047a777f60a9a49cdeb4"} Jan 21 10:20:05 crc kubenswrapper[5113]: I0121 10:20:05.448844 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483180-lq28x" Jan 21 10:20:05 crc kubenswrapper[5113]: I0121 10:20:05.591313 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jslrs\" (UniqueName: \"kubernetes.io/projected/c712538c-7579-496c-bf7f-026a992acf4f-kube-api-access-jslrs\") pod \"c712538c-7579-496c-bf7f-026a992acf4f\" (UID: \"c712538c-7579-496c-bf7f-026a992acf4f\") " Jan 21 10:20:05 crc kubenswrapper[5113]: I0121 10:20:05.611111 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c712538c-7579-496c-bf7f-026a992acf4f-kube-api-access-jslrs" (OuterVolumeSpecName: "kube-api-access-jslrs") pod "c712538c-7579-496c-bf7f-026a992acf4f" (UID: "c712538c-7579-496c-bf7f-026a992acf4f"). InnerVolumeSpecName "kube-api-access-jslrs". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:20:05 crc kubenswrapper[5113]: I0121 10:20:05.694148 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jslrs\" (UniqueName: \"kubernetes.io/projected/c712538c-7579-496c-bf7f-026a992acf4f-kube-api-access-jslrs\") on node \"crc\" DevicePath \"\"" Jan 21 10:20:06 crc kubenswrapper[5113]: I0121 10:20:06.120044 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483180-lq28x" event={"ID":"c712538c-7579-496c-bf7f-026a992acf4f","Type":"ContainerDied","Data":"d2243a2afbd3e0499272864b33e30d0702755c1e10c6da1e2de0b772f2594bc9"} Jan 21 10:20:06 crc kubenswrapper[5113]: I0121 10:20:06.120128 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2243a2afbd3e0499272864b33e30d0702755c1e10c6da1e2de0b772f2594bc9" Jan 21 10:20:06 crc kubenswrapper[5113]: I0121 10:20:06.120251 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483180-lq28x" Jan 21 10:20:06 crc kubenswrapper[5113]: I0121 10:20:06.190847 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483174-jxj8t"] Jan 21 10:20:06 crc kubenswrapper[5113]: I0121 10:20:06.198552 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483174-jxj8t"] Jan 21 10:20:06 crc kubenswrapper[5113]: I0121 10:20:06.857457 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f290d176-4d18-448b-89bf-bcbca2e60113" path="/var/lib/kubelet/pods/f290d176-4d18-448b-89bf-bcbca2e60113/volumes" Jan 21 10:20:08 crc kubenswrapper[5113]: I0121 10:20:08.844491 5113 scope.go:117] "RemoveContainer" containerID="7a222ced9e6ada1383ad249969c0fe50f3b46dad64f151d2be479e40f9e59f23" Jan 21 10:20:08 crc kubenswrapper[5113]: E0121 10:20:08.844997 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:20:21 crc kubenswrapper[5113]: I0121 10:20:21.844418 5113 scope.go:117] "RemoveContainer" containerID="7a222ced9e6ada1383ad249969c0fe50f3b46dad64f151d2be479e40f9e59f23" Jan 21 10:20:21 crc kubenswrapper[5113]: E0121 10:20:21.845365 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:20:32 crc kubenswrapper[5113]: I0121 10:20:32.844708 5113 scope.go:117] "RemoveContainer" containerID="7a222ced9e6ada1383ad249969c0fe50f3b46dad64f151d2be479e40f9e59f23" Jan 21 10:20:32 crc kubenswrapper[5113]: E0121 10:20:32.846460 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:20:44 crc kubenswrapper[5113]: I0121 10:20:44.843934 5113 scope.go:117] "RemoveContainer" containerID="7a222ced9e6ada1383ad249969c0fe50f3b46dad64f151d2be479e40f9e59f23" Jan 21 10:20:44 crc kubenswrapper[5113]: E0121 10:20:44.845057 5113 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:20:59 crc kubenswrapper[5113]: I0121 10:20:59.843913 5113 scope.go:117] "RemoveContainer" containerID="7a222ced9e6ada1383ad249969c0fe50f3b46dad64f151d2be479e40f9e59f23" Jan 21 10:20:59 crc kubenswrapper[5113]: E0121 10:20:59.844986 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:21:04 crc kubenswrapper[5113]: I0121 10:21:04.174910 5113 scope.go:117] "RemoveContainer" containerID="796fd580db1887868c2bf84ea620da929b6c5c9477b0e8594f4968f5bb77be8f" Jan 21 10:21:14 crc kubenswrapper[5113]: I0121 10:21:14.844338 5113 scope.go:117] "RemoveContainer" containerID="7a222ced9e6ada1383ad249969c0fe50f3b46dad64f151d2be479e40f9e59f23" Jan 21 10:21:14 crc kubenswrapper[5113]: E0121 10:21:14.845280 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:21:27 crc kubenswrapper[5113]: I0121 10:21:27.843855 5113 scope.go:117] 
"RemoveContainer" containerID="7a222ced9e6ada1383ad249969c0fe50f3b46dad64f151d2be479e40f9e59f23" Jan 21 10:21:27 crc kubenswrapper[5113]: E0121 10:21:27.844883 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:21:42 crc kubenswrapper[5113]: I0121 10:21:42.843454 5113 scope.go:117] "RemoveContainer" containerID="7a222ced9e6ada1383ad249969c0fe50f3b46dad64f151d2be479e40f9e59f23" Jan 21 10:21:42 crc kubenswrapper[5113]: E0121 10:21:42.844446 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:21:56 crc kubenswrapper[5113]: I0121 10:21:56.845112 5113 scope.go:117] "RemoveContainer" containerID="7a222ced9e6ada1383ad249969c0fe50f3b46dad64f151d2be479e40f9e59f23" Jan 21 10:21:56 crc kubenswrapper[5113]: E0121 10:21:56.846812 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:22:00 crc kubenswrapper[5113]: I0121 10:22:00.163610 
5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483182-wf82f"] Jan 21 10:22:00 crc kubenswrapper[5113]: I0121 10:22:00.165425 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c712538c-7579-496c-bf7f-026a992acf4f" containerName="oc" Jan 21 10:22:00 crc kubenswrapper[5113]: I0121 10:22:00.165474 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="c712538c-7579-496c-bf7f-026a992acf4f" containerName="oc" Jan 21 10:22:00 crc kubenswrapper[5113]: I0121 10:22:00.165774 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="c712538c-7579-496c-bf7f-026a992acf4f" containerName="oc" Jan 21 10:22:00 crc kubenswrapper[5113]: I0121 10:22:00.171596 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483182-wf82f" Jan 21 10:22:00 crc kubenswrapper[5113]: I0121 10:22:00.175629 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 10:22:00 crc kubenswrapper[5113]: I0121 10:22:00.176016 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-mmrr5\"" Jan 21 10:22:00 crc kubenswrapper[5113]: I0121 10:22:00.176250 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 10:22:00 crc kubenswrapper[5113]: I0121 10:22:00.186073 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483182-wf82f"] Jan 21 10:22:00 crc kubenswrapper[5113]: I0121 10:22:00.240284 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvcvt\" (UniqueName: \"kubernetes.io/projected/23f765f5-93ee-4740-b8d7-b66bcf1eeb50-kube-api-access-wvcvt\") pod \"auto-csr-approver-29483182-wf82f\" (UID: 
\"23f765f5-93ee-4740-b8d7-b66bcf1eeb50\") " pod="openshift-infra/auto-csr-approver-29483182-wf82f" Jan 21 10:22:00 crc kubenswrapper[5113]: I0121 10:22:00.342067 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wvcvt\" (UniqueName: \"kubernetes.io/projected/23f765f5-93ee-4740-b8d7-b66bcf1eeb50-kube-api-access-wvcvt\") pod \"auto-csr-approver-29483182-wf82f\" (UID: \"23f765f5-93ee-4740-b8d7-b66bcf1eeb50\") " pod="openshift-infra/auto-csr-approver-29483182-wf82f" Jan 21 10:22:00 crc kubenswrapper[5113]: I0121 10:22:00.364930 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvcvt\" (UniqueName: \"kubernetes.io/projected/23f765f5-93ee-4740-b8d7-b66bcf1eeb50-kube-api-access-wvcvt\") pod \"auto-csr-approver-29483182-wf82f\" (UID: \"23f765f5-93ee-4740-b8d7-b66bcf1eeb50\") " pod="openshift-infra/auto-csr-approver-29483182-wf82f" Jan 21 10:22:00 crc kubenswrapper[5113]: I0121 10:22:00.496549 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483182-wf82f" Jan 21 10:22:01 crc kubenswrapper[5113]: I0121 10:22:01.030537 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483182-wf82f"] Jan 21 10:22:01 crc kubenswrapper[5113]: I0121 10:22:01.282484 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483182-wf82f" event={"ID":"23f765f5-93ee-4740-b8d7-b66bcf1eeb50","Type":"ContainerStarted","Data":"56016ab78806dc23b22d961f0b83d3f8eba95989e62cae9784f48cbaae089c92"} Jan 21 10:22:03 crc kubenswrapper[5113]: I0121 10:22:03.327057 5113 generic.go:358] "Generic (PLEG): container finished" podID="23f765f5-93ee-4740-b8d7-b66bcf1eeb50" containerID="ca2dad778989735125b13beeddf57b7e12d113aa3d58269f56904a2ecb4a4b04" exitCode=0 Jan 21 10:22:03 crc kubenswrapper[5113]: I0121 10:22:03.327298 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483182-wf82f" event={"ID":"23f765f5-93ee-4740-b8d7-b66bcf1eeb50","Type":"ContainerDied","Data":"ca2dad778989735125b13beeddf57b7e12d113aa3d58269f56904a2ecb4a4b04"} Jan 21 10:22:04 crc kubenswrapper[5113]: I0121 10:22:04.724586 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483182-wf82f" Jan 21 10:22:04 crc kubenswrapper[5113]: I0121 10:22:04.856714 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvcvt\" (UniqueName: \"kubernetes.io/projected/23f765f5-93ee-4740-b8d7-b66bcf1eeb50-kube-api-access-wvcvt\") pod \"23f765f5-93ee-4740-b8d7-b66bcf1eeb50\" (UID: \"23f765f5-93ee-4740-b8d7-b66bcf1eeb50\") " Jan 21 10:22:04 crc kubenswrapper[5113]: I0121 10:22:04.865917 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23f765f5-93ee-4740-b8d7-b66bcf1eeb50-kube-api-access-wvcvt" (OuterVolumeSpecName: "kube-api-access-wvcvt") pod "23f765f5-93ee-4740-b8d7-b66bcf1eeb50" (UID: "23f765f5-93ee-4740-b8d7-b66bcf1eeb50"). InnerVolumeSpecName "kube-api-access-wvcvt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:22:04 crc kubenswrapper[5113]: I0121 10:22:04.960485 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wvcvt\" (UniqueName: \"kubernetes.io/projected/23f765f5-93ee-4740-b8d7-b66bcf1eeb50-kube-api-access-wvcvt\") on node \"crc\" DevicePath \"\"" Jan 21 10:22:05 crc kubenswrapper[5113]: I0121 10:22:05.347440 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483182-wf82f" event={"ID":"23f765f5-93ee-4740-b8d7-b66bcf1eeb50","Type":"ContainerDied","Data":"56016ab78806dc23b22d961f0b83d3f8eba95989e62cae9784f48cbaae089c92"} Jan 21 10:22:05 crc kubenswrapper[5113]: I0121 10:22:05.347490 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483182-wf82f" Jan 21 10:22:05 crc kubenswrapper[5113]: I0121 10:22:05.348543 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56016ab78806dc23b22d961f0b83d3f8eba95989e62cae9784f48cbaae089c92" Jan 21 10:22:05 crc kubenswrapper[5113]: I0121 10:22:05.855165 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483176-pcbgh"] Jan 21 10:22:05 crc kubenswrapper[5113]: I0121 10:22:05.863709 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483176-pcbgh"] Jan 21 10:22:06 crc kubenswrapper[5113]: I0121 10:22:06.861285 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fce39ddd-5e7a-4843-961b-964a2cccfe1d" path="/var/lib/kubelet/pods/fce39ddd-5e7a-4843-961b-964a2cccfe1d/volumes" Jan 21 10:22:10 crc kubenswrapper[5113]: I0121 10:22:10.860469 5113 scope.go:117] "RemoveContainer" containerID="7a222ced9e6ada1383ad249969c0fe50f3b46dad64f151d2be479e40f9e59f23" Jan 21 10:22:10 crc kubenswrapper[5113]: E0121 10:22:10.861060 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:22:21 crc kubenswrapper[5113]: I0121 10:22:21.844542 5113 scope.go:117] "RemoveContainer" containerID="7a222ced9e6ada1383ad249969c0fe50f3b46dad64f151d2be479e40f9e59f23" Jan 21 10:22:21 crc kubenswrapper[5113]: E0121 10:22:21.845438 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:22:34 crc kubenswrapper[5113]: I0121 10:22:34.843267 5113 scope.go:117] "RemoveContainer" containerID="7a222ced9e6ada1383ad249969c0fe50f3b46dad64f151d2be479e40f9e59f23" Jan 21 10:22:34 crc kubenswrapper[5113]: E0121 10:22:34.845923 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:22:45 crc kubenswrapper[5113]: I0121 10:22:45.843365 5113 scope.go:117] "RemoveContainer" containerID="7a222ced9e6ada1383ad249969c0fe50f3b46dad64f151d2be479e40f9e59f23" Jan 21 10:22:45 crc kubenswrapper[5113]: E0121 10:22:45.844248 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:22:52 crc kubenswrapper[5113]: I0121 10:22:52.700778 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vcw7s_11da35cd-b282-4537-ac8f-b6c86b18c21f/kube-multus/0.log" Jan 21 10:22:52 crc kubenswrapper[5113]: I0121 10:22:52.701035 5113 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-vcw7s_11da35cd-b282-4537-ac8f-b6c86b18c21f/kube-multus/0.log" Jan 21 10:22:52 crc kubenswrapper[5113]: I0121 10:22:52.718841 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 10:22:52 crc kubenswrapper[5113]: I0121 10:22:52.719405 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 10:22:57 crc kubenswrapper[5113]: I0121 10:22:57.843017 5113 scope.go:117] "RemoveContainer" containerID="7a222ced9e6ada1383ad249969c0fe50f3b46dad64f151d2be479e40f9e59f23" Jan 21 10:22:57 crc kubenswrapper[5113]: E0121 10:22:57.843540 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:23:04 crc kubenswrapper[5113]: I0121 10:23:04.379826 5113 scope.go:117] "RemoveContainer" containerID="f465d21fe2143aee264cd70018485e5506f4329322a510b03eb7e3de98ee0a91" Jan 21 10:23:09 crc kubenswrapper[5113]: I0121 10:23:09.843912 5113 scope.go:117] "RemoveContainer" containerID="7a222ced9e6ada1383ad249969c0fe50f3b46dad64f151d2be479e40f9e59f23" Jan 21 10:23:10 crc kubenswrapper[5113]: I0121 10:23:10.932485 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" event={"ID":"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd","Type":"ContainerStarted","Data":"4334de7c5411ae353dbf627284825a2ca3d89ad65ee471218878e9efbc9f2ae5"} Jan 21 10:24:00 crc 
kubenswrapper[5113]: I0121 10:24:00.147055 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483184-kpnqk"] Jan 21 10:24:00 crc kubenswrapper[5113]: I0121 10:24:00.148325 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="23f765f5-93ee-4740-b8d7-b66bcf1eeb50" containerName="oc" Jan 21 10:24:00 crc kubenswrapper[5113]: I0121 10:24:00.148338 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="23f765f5-93ee-4740-b8d7-b66bcf1eeb50" containerName="oc" Jan 21 10:24:00 crc kubenswrapper[5113]: I0121 10:24:00.148467 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="23f765f5-93ee-4740-b8d7-b66bcf1eeb50" containerName="oc" Jan 21 10:24:00 crc kubenswrapper[5113]: I0121 10:24:00.151929 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483184-kpnqk" Jan 21 10:24:00 crc kubenswrapper[5113]: I0121 10:24:00.155130 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 10:24:00 crc kubenswrapper[5113]: I0121 10:24:00.155376 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-mmrr5\"" Jan 21 10:24:00 crc kubenswrapper[5113]: I0121 10:24:00.155910 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 10:24:00 crc kubenswrapper[5113]: I0121 10:24:00.165658 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483184-kpnqk"] Jan 21 10:24:00 crc kubenswrapper[5113]: I0121 10:24:00.291820 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks4lq\" (UniqueName: \"kubernetes.io/projected/d3ce9f39-0eaa-4641-a904-ddf81004f047-kube-api-access-ks4lq\") pod 
\"auto-csr-approver-29483184-kpnqk\" (UID: \"d3ce9f39-0eaa-4641-a904-ddf81004f047\") " pod="openshift-infra/auto-csr-approver-29483184-kpnqk" Jan 21 10:24:00 crc kubenswrapper[5113]: I0121 10:24:00.395605 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ks4lq\" (UniqueName: \"kubernetes.io/projected/d3ce9f39-0eaa-4641-a904-ddf81004f047-kube-api-access-ks4lq\") pod \"auto-csr-approver-29483184-kpnqk\" (UID: \"d3ce9f39-0eaa-4641-a904-ddf81004f047\") " pod="openshift-infra/auto-csr-approver-29483184-kpnqk" Jan 21 10:24:00 crc kubenswrapper[5113]: I0121 10:24:00.452016 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks4lq\" (UniqueName: \"kubernetes.io/projected/d3ce9f39-0eaa-4641-a904-ddf81004f047-kube-api-access-ks4lq\") pod \"auto-csr-approver-29483184-kpnqk\" (UID: \"d3ce9f39-0eaa-4641-a904-ddf81004f047\") " pod="openshift-infra/auto-csr-approver-29483184-kpnqk" Jan 21 10:24:00 crc kubenswrapper[5113]: I0121 10:24:00.510311 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483184-kpnqk" Jan 21 10:24:00 crc kubenswrapper[5113]: I0121 10:24:00.840861 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483184-kpnqk"] Jan 21 10:24:00 crc kubenswrapper[5113]: W0121 10:24:00.863620 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3ce9f39_0eaa_4641_a904_ddf81004f047.slice/crio-139a94c4b501447264c71297a6820b21bf4252b75d1fe1f989802531e7f77a13 WatchSource:0}: Error finding container 139a94c4b501447264c71297a6820b21bf4252b75d1fe1f989802531e7f77a13: Status 404 returned error can't find the container with id 139a94c4b501447264c71297a6820b21bf4252b75d1fe1f989802531e7f77a13 Jan 21 10:24:01 crc kubenswrapper[5113]: I0121 10:24:01.406768 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483184-kpnqk" event={"ID":"d3ce9f39-0eaa-4641-a904-ddf81004f047","Type":"ContainerStarted","Data":"139a94c4b501447264c71297a6820b21bf4252b75d1fe1f989802531e7f77a13"} Jan 21 10:24:02 crc kubenswrapper[5113]: I0121 10:24:02.419352 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483184-kpnqk" event={"ID":"d3ce9f39-0eaa-4641-a904-ddf81004f047","Type":"ContainerStarted","Data":"adc8ea5491c5e6f234ac25df2b2155475ad5c8e4d03ebc8eca99fe65977a9cf2"} Jan 21 10:24:02 crc kubenswrapper[5113]: I0121 10:24:02.440788 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29483184-kpnqk" podStartSLOduration=1.475692324 podStartE2EDuration="2.440756397s" podCreationTimestamp="2026-01-21 10:24:00 +0000 UTC" firstStartedPulling="2026-01-21 10:24:00.86771807 +0000 UTC m=+3970.368545139" lastFinishedPulling="2026-01-21 10:24:01.832782163 +0000 UTC m=+3971.333609212" observedRunningTime="2026-01-21 10:24:02.435472818 +0000 UTC m=+3971.936299887" 
watchObservedRunningTime="2026-01-21 10:24:02.440756397 +0000 UTC m=+3971.941583456" Jan 21 10:24:03 crc kubenswrapper[5113]: I0121 10:24:03.432092 5113 generic.go:358] "Generic (PLEG): container finished" podID="d3ce9f39-0eaa-4641-a904-ddf81004f047" containerID="adc8ea5491c5e6f234ac25df2b2155475ad5c8e4d03ebc8eca99fe65977a9cf2" exitCode=0 Jan 21 10:24:03 crc kubenswrapper[5113]: I0121 10:24:03.432181 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483184-kpnqk" event={"ID":"d3ce9f39-0eaa-4641-a904-ddf81004f047","Type":"ContainerDied","Data":"adc8ea5491c5e6f234ac25df2b2155475ad5c8e4d03ebc8eca99fe65977a9cf2"} Jan 21 10:24:04 crc kubenswrapper[5113]: I0121 10:24:04.732337 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483184-kpnqk" Jan 21 10:24:04 crc kubenswrapper[5113]: I0121 10:24:04.776272 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks4lq\" (UniqueName: \"kubernetes.io/projected/d3ce9f39-0eaa-4641-a904-ddf81004f047-kube-api-access-ks4lq\") pod \"d3ce9f39-0eaa-4641-a904-ddf81004f047\" (UID: \"d3ce9f39-0eaa-4641-a904-ddf81004f047\") " Jan 21 10:24:04 crc kubenswrapper[5113]: I0121 10:24:04.786694 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3ce9f39-0eaa-4641-a904-ddf81004f047-kube-api-access-ks4lq" (OuterVolumeSpecName: "kube-api-access-ks4lq") pod "d3ce9f39-0eaa-4641-a904-ddf81004f047" (UID: "d3ce9f39-0eaa-4641-a904-ddf81004f047"). InnerVolumeSpecName "kube-api-access-ks4lq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:24:04 crc kubenswrapper[5113]: I0121 10:24:04.878863 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks4lq\" (UniqueName: \"kubernetes.io/projected/d3ce9f39-0eaa-4641-a904-ddf81004f047-kube-api-access-ks4lq\") on node \"crc\" DevicePath \"\"" Jan 21 10:24:05 crc kubenswrapper[5113]: I0121 10:24:05.452167 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483184-kpnqk" Jan 21 10:24:05 crc kubenswrapper[5113]: I0121 10:24:05.452179 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483184-kpnqk" event={"ID":"d3ce9f39-0eaa-4641-a904-ddf81004f047","Type":"ContainerDied","Data":"139a94c4b501447264c71297a6820b21bf4252b75d1fe1f989802531e7f77a13"} Jan 21 10:24:05 crc kubenswrapper[5113]: I0121 10:24:05.452672 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="139a94c4b501447264c71297a6820b21bf4252b75d1fe1f989802531e7f77a13" Jan 21 10:24:05 crc kubenswrapper[5113]: I0121 10:24:05.510423 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483178-9ftsj"] Jan 21 10:24:05 crc kubenswrapper[5113]: I0121 10:24:05.518579 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483178-9ftsj"] Jan 21 10:24:06 crc kubenswrapper[5113]: I0121 10:24:06.854185 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91a2da3f-e2b3-4a48-a8b6-47de1db2c643" path="/var/lib/kubelet/pods/91a2da3f-e2b3-4a48-a8b6-47de1db2c643/volumes" Jan 21 10:24:40 crc kubenswrapper[5113]: I0121 10:24:40.286946 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dn65n"] Jan 21 10:24:40 crc kubenswrapper[5113]: I0121 10:24:40.288567 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="d3ce9f39-0eaa-4641-a904-ddf81004f047" containerName="oc" Jan 21 10:24:40 crc kubenswrapper[5113]: I0121 10:24:40.288587 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3ce9f39-0eaa-4641-a904-ddf81004f047" containerName="oc" Jan 21 10:24:40 crc kubenswrapper[5113]: I0121 10:24:40.288805 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="d3ce9f39-0eaa-4641-a904-ddf81004f047" containerName="oc" Jan 21 10:24:40 crc kubenswrapper[5113]: I0121 10:24:40.304350 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dn65n"] Jan 21 10:24:40 crc kubenswrapper[5113]: I0121 10:24:40.304625 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dn65n" Jan 21 10:24:40 crc kubenswrapper[5113]: I0121 10:24:40.425349 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1a19f39-340b-4995-8955-51e1de760512-utilities\") pod \"certified-operators-dn65n\" (UID: \"d1a19f39-340b-4995-8955-51e1de760512\") " pod="openshift-marketplace/certified-operators-dn65n" Jan 21 10:24:40 crc kubenswrapper[5113]: I0121 10:24:40.425411 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbdbg\" (UniqueName: \"kubernetes.io/projected/d1a19f39-340b-4995-8955-51e1de760512-kube-api-access-rbdbg\") pod \"certified-operators-dn65n\" (UID: \"d1a19f39-340b-4995-8955-51e1de760512\") " pod="openshift-marketplace/certified-operators-dn65n" Jan 21 10:24:40 crc kubenswrapper[5113]: I0121 10:24:40.425521 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1a19f39-340b-4995-8955-51e1de760512-catalog-content\") pod \"certified-operators-dn65n\" (UID: \"d1a19f39-340b-4995-8955-51e1de760512\") " 
pod="openshift-marketplace/certified-operators-dn65n" Jan 21 10:24:40 crc kubenswrapper[5113]: I0121 10:24:40.527387 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1a19f39-340b-4995-8955-51e1de760512-utilities\") pod \"certified-operators-dn65n\" (UID: \"d1a19f39-340b-4995-8955-51e1de760512\") " pod="openshift-marketplace/certified-operators-dn65n" Jan 21 10:24:40 crc kubenswrapper[5113]: I0121 10:24:40.527809 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rbdbg\" (UniqueName: \"kubernetes.io/projected/d1a19f39-340b-4995-8955-51e1de760512-kube-api-access-rbdbg\") pod \"certified-operators-dn65n\" (UID: \"d1a19f39-340b-4995-8955-51e1de760512\") " pod="openshift-marketplace/certified-operators-dn65n" Jan 21 10:24:40 crc kubenswrapper[5113]: I0121 10:24:40.527844 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1a19f39-340b-4995-8955-51e1de760512-catalog-content\") pod \"certified-operators-dn65n\" (UID: \"d1a19f39-340b-4995-8955-51e1de760512\") " pod="openshift-marketplace/certified-operators-dn65n" Jan 21 10:24:40 crc kubenswrapper[5113]: I0121 10:24:40.528146 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1a19f39-340b-4995-8955-51e1de760512-utilities\") pod \"certified-operators-dn65n\" (UID: \"d1a19f39-340b-4995-8955-51e1de760512\") " pod="openshift-marketplace/certified-operators-dn65n" Jan 21 10:24:40 crc kubenswrapper[5113]: I0121 10:24:40.528309 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1a19f39-340b-4995-8955-51e1de760512-catalog-content\") pod \"certified-operators-dn65n\" (UID: \"d1a19f39-340b-4995-8955-51e1de760512\") " 
pod="openshift-marketplace/certified-operators-dn65n" Jan 21 10:24:40 crc kubenswrapper[5113]: I0121 10:24:40.556205 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbdbg\" (UniqueName: \"kubernetes.io/projected/d1a19f39-340b-4995-8955-51e1de760512-kube-api-access-rbdbg\") pod \"certified-operators-dn65n\" (UID: \"d1a19f39-340b-4995-8955-51e1de760512\") " pod="openshift-marketplace/certified-operators-dn65n" Jan 21 10:24:40 crc kubenswrapper[5113]: I0121 10:24:40.645851 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dn65n" Jan 21 10:24:41 crc kubenswrapper[5113]: I0121 10:24:41.091279 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dn65n"] Jan 21 10:24:41 crc kubenswrapper[5113]: W0121 10:24:41.098375 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1a19f39_340b_4995_8955_51e1de760512.slice/crio-5e70a13b5ecb34784e82990f5e9835b8ec5611520daae6b7e77b7d55a5435758 WatchSource:0}: Error finding container 5e70a13b5ecb34784e82990f5e9835b8ec5611520daae6b7e77b7d55a5435758: Status 404 returned error can't find the container with id 5e70a13b5ecb34784e82990f5e9835b8ec5611520daae6b7e77b7d55a5435758 Jan 21 10:24:41 crc kubenswrapper[5113]: I0121 10:24:41.847383 5113 generic.go:358] "Generic (PLEG): container finished" podID="d1a19f39-340b-4995-8955-51e1de760512" containerID="b727968e7c6d16be8b947a0c6f543554fbe341ea9a5bdf62c424b6072754d786" exitCode=0 Jan 21 10:24:41 crc kubenswrapper[5113]: I0121 10:24:41.847564 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dn65n" event={"ID":"d1a19f39-340b-4995-8955-51e1de760512","Type":"ContainerDied","Data":"b727968e7c6d16be8b947a0c6f543554fbe341ea9a5bdf62c424b6072754d786"} Jan 21 10:24:41 crc kubenswrapper[5113]: I0121 10:24:41.847589 
5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dn65n" event={"ID":"d1a19f39-340b-4995-8955-51e1de760512","Type":"ContainerStarted","Data":"5e70a13b5ecb34784e82990f5e9835b8ec5611520daae6b7e77b7d55a5435758"} Jan 21 10:24:42 crc kubenswrapper[5113]: I0121 10:24:42.856758 5113 generic.go:358] "Generic (PLEG): container finished" podID="d1a19f39-340b-4995-8955-51e1de760512" containerID="ee7078e7403a1f9c2db9a8695ceeca68abd273004674ac701daf419108248867" exitCode=0 Jan 21 10:24:42 crc kubenswrapper[5113]: I0121 10:24:42.857485 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dn65n" event={"ID":"d1a19f39-340b-4995-8955-51e1de760512","Type":"ContainerDied","Data":"ee7078e7403a1f9c2db9a8695ceeca68abd273004674ac701daf419108248867"} Jan 21 10:24:43 crc kubenswrapper[5113]: I0121 10:24:43.866474 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dn65n" event={"ID":"d1a19f39-340b-4995-8955-51e1de760512","Type":"ContainerStarted","Data":"603a74cbcb3e6c5c25548503304a31e38f86af0c4ad8d71ae2cbab436b52fe41"} Jan 21 10:24:43 crc kubenswrapper[5113]: I0121 10:24:43.898591 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dn65n" podStartSLOduration=3.35892805 podStartE2EDuration="3.898568144s" podCreationTimestamp="2026-01-21 10:24:40 +0000 UTC" firstStartedPulling="2026-01-21 10:24:41.848705408 +0000 UTC m=+4011.349532487" lastFinishedPulling="2026-01-21 10:24:42.388345532 +0000 UTC m=+4011.889172581" observedRunningTime="2026-01-21 10:24:43.897548425 +0000 UTC m=+4013.398375484" watchObservedRunningTime="2026-01-21 10:24:43.898568144 +0000 UTC m=+4013.399395193" Jan 21 10:24:50 crc kubenswrapper[5113]: I0121 10:24:50.646872 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-dn65n" Jan 21 
10:24:50 crc kubenswrapper[5113]: I0121 10:24:50.647804 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dn65n" Jan 21 10:24:50 crc kubenswrapper[5113]: I0121 10:24:50.726635 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dn65n" Jan 21 10:24:50 crc kubenswrapper[5113]: I0121 10:24:50.998346 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dn65n" Jan 21 10:24:51 crc kubenswrapper[5113]: I0121 10:24:51.043489 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dn65n"] Jan 21 10:24:52 crc kubenswrapper[5113]: I0121 10:24:52.955639 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dn65n" podUID="d1a19f39-340b-4995-8955-51e1de760512" containerName="registry-server" containerID="cri-o://603a74cbcb3e6c5c25548503304a31e38f86af0c4ad8d71ae2cbab436b52fe41" gracePeriod=2 Jan 21 10:24:54 crc kubenswrapper[5113]: I0121 10:24:54.488909 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dn65n" Jan 21 10:24:54 crc kubenswrapper[5113]: I0121 10:24:54.575864 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1a19f39-340b-4995-8955-51e1de760512-utilities\") pod \"d1a19f39-340b-4995-8955-51e1de760512\" (UID: \"d1a19f39-340b-4995-8955-51e1de760512\") " Jan 21 10:24:54 crc kubenswrapper[5113]: I0121 10:24:54.575939 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbdbg\" (UniqueName: \"kubernetes.io/projected/d1a19f39-340b-4995-8955-51e1de760512-kube-api-access-rbdbg\") pod \"d1a19f39-340b-4995-8955-51e1de760512\" (UID: \"d1a19f39-340b-4995-8955-51e1de760512\") " Jan 21 10:24:54 crc kubenswrapper[5113]: I0121 10:24:54.576041 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1a19f39-340b-4995-8955-51e1de760512-catalog-content\") pod \"d1a19f39-340b-4995-8955-51e1de760512\" (UID: \"d1a19f39-340b-4995-8955-51e1de760512\") " Jan 21 10:24:54 crc kubenswrapper[5113]: I0121 10:24:54.582359 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1a19f39-340b-4995-8955-51e1de760512-kube-api-access-rbdbg" (OuterVolumeSpecName: "kube-api-access-rbdbg") pod "d1a19f39-340b-4995-8955-51e1de760512" (UID: "d1a19f39-340b-4995-8955-51e1de760512"). InnerVolumeSpecName "kube-api-access-rbdbg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:24:54 crc kubenswrapper[5113]: I0121 10:24:54.590435 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1a19f39-340b-4995-8955-51e1de760512-utilities" (OuterVolumeSpecName: "utilities") pod "d1a19f39-340b-4995-8955-51e1de760512" (UID: "d1a19f39-340b-4995-8955-51e1de760512"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:24:54 crc kubenswrapper[5113]: I0121 10:24:54.609357 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1a19f39-340b-4995-8955-51e1de760512-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d1a19f39-340b-4995-8955-51e1de760512" (UID: "d1a19f39-340b-4995-8955-51e1de760512"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:24:54 crc kubenswrapper[5113]: I0121 10:24:54.678341 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1a19f39-340b-4995-8955-51e1de760512-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:24:54 crc kubenswrapper[5113]: I0121 10:24:54.678415 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rbdbg\" (UniqueName: \"kubernetes.io/projected/d1a19f39-340b-4995-8955-51e1de760512-kube-api-access-rbdbg\") on node \"crc\" DevicePath \"\"" Jan 21 10:24:54 crc kubenswrapper[5113]: I0121 10:24:54.678439 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1a19f39-340b-4995-8955-51e1de760512-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:24:54 crc kubenswrapper[5113]: I0121 10:24:54.988022 5113 generic.go:358] "Generic (PLEG): container finished" podID="d1a19f39-340b-4995-8955-51e1de760512" containerID="603a74cbcb3e6c5c25548503304a31e38f86af0c4ad8d71ae2cbab436b52fe41" exitCode=0 Jan 21 10:24:54 crc kubenswrapper[5113]: I0121 10:24:54.988104 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dn65n" event={"ID":"d1a19f39-340b-4995-8955-51e1de760512","Type":"ContainerDied","Data":"603a74cbcb3e6c5c25548503304a31e38f86af0c4ad8d71ae2cbab436b52fe41"} Jan 21 10:24:54 crc kubenswrapper[5113]: I0121 10:24:54.988129 5113 kubelet.go:2569] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-dn65n" event={"ID":"d1a19f39-340b-4995-8955-51e1de760512","Type":"ContainerDied","Data":"5e70a13b5ecb34784e82990f5e9835b8ec5611520daae6b7e77b7d55a5435758"} Jan 21 10:24:54 crc kubenswrapper[5113]: I0121 10:24:54.988145 5113 scope.go:117] "RemoveContainer" containerID="603a74cbcb3e6c5c25548503304a31e38f86af0c4ad8d71ae2cbab436b52fe41" Jan 21 10:24:54 crc kubenswrapper[5113]: I0121 10:24:54.988195 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dn65n" Jan 21 10:24:55 crc kubenswrapper[5113]: I0121 10:24:55.023381 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dn65n"] Jan 21 10:24:55 crc kubenswrapper[5113]: I0121 10:24:55.024497 5113 scope.go:117] "RemoveContainer" containerID="ee7078e7403a1f9c2db9a8695ceeca68abd273004674ac701daf419108248867" Jan 21 10:24:55 crc kubenswrapper[5113]: I0121 10:24:55.029702 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dn65n"] Jan 21 10:24:55 crc kubenswrapper[5113]: I0121 10:24:55.051192 5113 scope.go:117] "RemoveContainer" containerID="b727968e7c6d16be8b947a0c6f543554fbe341ea9a5bdf62c424b6072754d786" Jan 21 10:24:55 crc kubenswrapper[5113]: I0121 10:24:55.088717 5113 scope.go:117] "RemoveContainer" containerID="603a74cbcb3e6c5c25548503304a31e38f86af0c4ad8d71ae2cbab436b52fe41" Jan 21 10:24:55 crc kubenswrapper[5113]: E0121 10:24:55.089069 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"603a74cbcb3e6c5c25548503304a31e38f86af0c4ad8d71ae2cbab436b52fe41\": container with ID starting with 603a74cbcb3e6c5c25548503304a31e38f86af0c4ad8d71ae2cbab436b52fe41 not found: ID does not exist" containerID="603a74cbcb3e6c5c25548503304a31e38f86af0c4ad8d71ae2cbab436b52fe41" Jan 21 10:24:55 crc kubenswrapper[5113]: I0121 
10:24:55.089111 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"603a74cbcb3e6c5c25548503304a31e38f86af0c4ad8d71ae2cbab436b52fe41"} err="failed to get container status \"603a74cbcb3e6c5c25548503304a31e38f86af0c4ad8d71ae2cbab436b52fe41\": rpc error: code = NotFound desc = could not find container \"603a74cbcb3e6c5c25548503304a31e38f86af0c4ad8d71ae2cbab436b52fe41\": container with ID starting with 603a74cbcb3e6c5c25548503304a31e38f86af0c4ad8d71ae2cbab436b52fe41 not found: ID does not exist" Jan 21 10:24:55 crc kubenswrapper[5113]: I0121 10:24:55.089137 5113 scope.go:117] "RemoveContainer" containerID="ee7078e7403a1f9c2db9a8695ceeca68abd273004674ac701daf419108248867" Jan 21 10:24:55 crc kubenswrapper[5113]: E0121 10:24:55.089512 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee7078e7403a1f9c2db9a8695ceeca68abd273004674ac701daf419108248867\": container with ID starting with ee7078e7403a1f9c2db9a8695ceeca68abd273004674ac701daf419108248867 not found: ID does not exist" containerID="ee7078e7403a1f9c2db9a8695ceeca68abd273004674ac701daf419108248867" Jan 21 10:24:55 crc kubenswrapper[5113]: I0121 10:24:55.089568 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee7078e7403a1f9c2db9a8695ceeca68abd273004674ac701daf419108248867"} err="failed to get container status \"ee7078e7403a1f9c2db9a8695ceeca68abd273004674ac701daf419108248867\": rpc error: code = NotFound desc = could not find container \"ee7078e7403a1f9c2db9a8695ceeca68abd273004674ac701daf419108248867\": container with ID starting with ee7078e7403a1f9c2db9a8695ceeca68abd273004674ac701daf419108248867 not found: ID does not exist" Jan 21 10:24:55 crc kubenswrapper[5113]: I0121 10:24:55.089605 5113 scope.go:117] "RemoveContainer" containerID="b727968e7c6d16be8b947a0c6f543554fbe341ea9a5bdf62c424b6072754d786" Jan 21 10:24:55 crc 
kubenswrapper[5113]: E0121 10:24:55.090025 5113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b727968e7c6d16be8b947a0c6f543554fbe341ea9a5bdf62c424b6072754d786\": container with ID starting with b727968e7c6d16be8b947a0c6f543554fbe341ea9a5bdf62c424b6072754d786 not found: ID does not exist" containerID="b727968e7c6d16be8b947a0c6f543554fbe341ea9a5bdf62c424b6072754d786" Jan 21 10:24:55 crc kubenswrapper[5113]: I0121 10:24:55.090052 5113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b727968e7c6d16be8b947a0c6f543554fbe341ea9a5bdf62c424b6072754d786"} err="failed to get container status \"b727968e7c6d16be8b947a0c6f543554fbe341ea9a5bdf62c424b6072754d786\": rpc error: code = NotFound desc = could not find container \"b727968e7c6d16be8b947a0c6f543554fbe341ea9a5bdf62c424b6072754d786\": container with ID starting with b727968e7c6d16be8b947a0c6f543554fbe341ea9a5bdf62c424b6072754d786 not found: ID does not exist" Jan 21 10:24:56 crc kubenswrapper[5113]: I0121 10:24:56.860479 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1a19f39-340b-4995-8955-51e1de760512" path="/var/lib/kubelet/pods/d1a19f39-340b-4995-8955-51e1de760512/volumes" Jan 21 10:25:04 crc kubenswrapper[5113]: I0121 10:25:04.546560 5113 scope.go:117] "RemoveContainer" containerID="5ffd52d476c01095090520ff3da3fb49708bad1c1076cc6eca2e3f3832e6f1c3" Jan 21 10:25:28 crc kubenswrapper[5113]: I0121 10:25:28.340515 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:25:28 crc kubenswrapper[5113]: I0121 10:25:28.341096 5113 prober.go:120] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:25:58 crc kubenswrapper[5113]: I0121 10:25:58.340520 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:25:58 crc kubenswrapper[5113]: I0121 10:25:58.341405 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:26:00 crc kubenswrapper[5113]: I0121 10:26:00.147835 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483186-dczc2"] Jan 21 10:26:00 crc kubenswrapper[5113]: I0121 10:26:00.149045 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d1a19f39-340b-4995-8955-51e1de760512" containerName="extract-utilities" Jan 21 10:26:00 crc kubenswrapper[5113]: I0121 10:26:00.149063 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1a19f39-340b-4995-8955-51e1de760512" containerName="extract-utilities" Jan 21 10:26:00 crc kubenswrapper[5113]: I0121 10:26:00.149086 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d1a19f39-340b-4995-8955-51e1de760512" containerName="extract-content" Jan 21 10:26:00 crc kubenswrapper[5113]: I0121 10:26:00.149093 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1a19f39-340b-4995-8955-51e1de760512" 
containerName="extract-content" Jan 21 10:26:00 crc kubenswrapper[5113]: I0121 10:26:00.149117 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d1a19f39-340b-4995-8955-51e1de760512" containerName="registry-server" Jan 21 10:26:00 crc kubenswrapper[5113]: I0121 10:26:00.149125 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1a19f39-340b-4995-8955-51e1de760512" containerName="registry-server" Jan 21 10:26:00 crc kubenswrapper[5113]: I0121 10:26:00.149303 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="d1a19f39-340b-4995-8955-51e1de760512" containerName="registry-server" Jan 21 10:26:00 crc kubenswrapper[5113]: I0121 10:26:00.429586 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483186-dczc2"] Jan 21 10:26:00 crc kubenswrapper[5113]: I0121 10:26:00.429824 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483186-dczc2" Jan 21 10:26:00 crc kubenswrapper[5113]: I0121 10:26:00.433226 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 10:26:00 crc kubenswrapper[5113]: I0121 10:26:00.433460 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-mmrr5\"" Jan 21 10:26:00 crc kubenswrapper[5113]: I0121 10:26:00.433467 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 10:26:00 crc kubenswrapper[5113]: I0121 10:26:00.604946 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l627w\" (UniqueName: \"kubernetes.io/projected/e0547695-a95c-4a56-b128-6dc04bafb957-kube-api-access-l627w\") pod \"auto-csr-approver-29483186-dczc2\" (UID: \"e0547695-a95c-4a56-b128-6dc04bafb957\") " 
pod="openshift-infra/auto-csr-approver-29483186-dczc2" Jan 21 10:26:00 crc kubenswrapper[5113]: I0121 10:26:00.706024 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l627w\" (UniqueName: \"kubernetes.io/projected/e0547695-a95c-4a56-b128-6dc04bafb957-kube-api-access-l627w\") pod \"auto-csr-approver-29483186-dczc2\" (UID: \"e0547695-a95c-4a56-b128-6dc04bafb957\") " pod="openshift-infra/auto-csr-approver-29483186-dczc2" Jan 21 10:26:00 crc kubenswrapper[5113]: I0121 10:26:00.753461 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l627w\" (UniqueName: \"kubernetes.io/projected/e0547695-a95c-4a56-b128-6dc04bafb957-kube-api-access-l627w\") pod \"auto-csr-approver-29483186-dczc2\" (UID: \"e0547695-a95c-4a56-b128-6dc04bafb957\") " pod="openshift-infra/auto-csr-approver-29483186-dczc2" Jan 21 10:26:00 crc kubenswrapper[5113]: I0121 10:26:00.769768 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483186-dczc2" Jan 21 10:26:01 crc kubenswrapper[5113]: I0121 10:26:01.019554 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483186-dczc2"] Jan 21 10:26:01 crc kubenswrapper[5113]: I0121 10:26:01.029255 5113 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 10:26:01 crc kubenswrapper[5113]: I0121 10:26:01.606977 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483186-dczc2" event={"ID":"e0547695-a95c-4a56-b128-6dc04bafb957","Type":"ContainerStarted","Data":"ba94aea8336955ac78031cf0dd2ed0cb3da9090c9642f61ee07c9787c96be05a"} Jan 21 10:26:02 crc kubenswrapper[5113]: I0121 10:26:02.618460 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483186-dczc2" 
event={"ID":"e0547695-a95c-4a56-b128-6dc04bafb957","Type":"ContainerStarted","Data":"ef13b92fc5a2a3bded1b06069862aa19382c7fbd4e613661033065a540da5493"} Jan 21 10:26:02 crc kubenswrapper[5113]: I0121 10:26:02.634837 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29483186-dczc2" podStartSLOduration=1.791018909 podStartE2EDuration="2.634811666s" podCreationTimestamp="2026-01-21 10:26:00 +0000 UTC" firstStartedPulling="2026-01-21 10:26:01.029467896 +0000 UTC m=+4090.530294945" lastFinishedPulling="2026-01-21 10:26:01.873260653 +0000 UTC m=+4091.374087702" observedRunningTime="2026-01-21 10:26:02.63284632 +0000 UTC m=+4092.133673369" watchObservedRunningTime="2026-01-21 10:26:02.634811666 +0000 UTC m=+4092.135638725" Jan 21 10:26:03 crc kubenswrapper[5113]: I0121 10:26:03.630395 5113 generic.go:358] "Generic (PLEG): container finished" podID="e0547695-a95c-4a56-b128-6dc04bafb957" containerID="ef13b92fc5a2a3bded1b06069862aa19382c7fbd4e613661033065a540da5493" exitCode=0 Jan 21 10:26:03 crc kubenswrapper[5113]: I0121 10:26:03.630535 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483186-dczc2" event={"ID":"e0547695-a95c-4a56-b128-6dc04bafb957","Type":"ContainerDied","Data":"ef13b92fc5a2a3bded1b06069862aa19382c7fbd4e613661033065a540da5493"} Jan 21 10:26:04 crc kubenswrapper[5113]: I0121 10:26:04.943603 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483186-dczc2" Jan 21 10:26:04 crc kubenswrapper[5113]: I0121 10:26:04.974156 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l627w\" (UniqueName: \"kubernetes.io/projected/e0547695-a95c-4a56-b128-6dc04bafb957-kube-api-access-l627w\") pod \"e0547695-a95c-4a56-b128-6dc04bafb957\" (UID: \"e0547695-a95c-4a56-b128-6dc04bafb957\") " Jan 21 10:26:04 crc kubenswrapper[5113]: I0121 10:26:04.987413 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0547695-a95c-4a56-b128-6dc04bafb957-kube-api-access-l627w" (OuterVolumeSpecName: "kube-api-access-l627w") pod "e0547695-a95c-4a56-b128-6dc04bafb957" (UID: "e0547695-a95c-4a56-b128-6dc04bafb957"). InnerVolumeSpecName "kube-api-access-l627w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:26:05 crc kubenswrapper[5113]: I0121 10:26:05.078518 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l627w\" (UniqueName: \"kubernetes.io/projected/e0547695-a95c-4a56-b128-6dc04bafb957-kube-api-access-l627w\") on node \"crc\" DevicePath \"\"" Jan 21 10:26:05 crc kubenswrapper[5113]: I0121 10:26:05.650798 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483186-dczc2" event={"ID":"e0547695-a95c-4a56-b128-6dc04bafb957","Type":"ContainerDied","Data":"ba94aea8336955ac78031cf0dd2ed0cb3da9090c9642f61ee07c9787c96be05a"} Jan 21 10:26:05 crc kubenswrapper[5113]: I0121 10:26:05.651128 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba94aea8336955ac78031cf0dd2ed0cb3da9090c9642f61ee07c9787c96be05a" Jan 21 10:26:05 crc kubenswrapper[5113]: I0121 10:26:05.650812 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483186-dczc2" Jan 21 10:26:05 crc kubenswrapper[5113]: I0121 10:26:05.706840 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483180-lq28x"] Jan 21 10:26:05 crc kubenswrapper[5113]: I0121 10:26:05.714897 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483180-lq28x"] Jan 21 10:26:06 crc kubenswrapper[5113]: I0121 10:26:06.854889 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c712538c-7579-496c-bf7f-026a992acf4f" path="/var/lib/kubelet/pods/c712538c-7579-496c-bf7f-026a992acf4f/volumes" Jan 21 10:26:27 crc kubenswrapper[5113]: I0121 10:26:27.479375 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jb74v"] Jan 21 10:26:27 crc kubenswrapper[5113]: I0121 10:26:27.481277 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e0547695-a95c-4a56-b128-6dc04bafb957" containerName="oc" Jan 21 10:26:27 crc kubenswrapper[5113]: I0121 10:26:27.481298 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0547695-a95c-4a56-b128-6dc04bafb957" containerName="oc" Jan 21 10:26:27 crc kubenswrapper[5113]: I0121 10:26:27.481532 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="e0547695-a95c-4a56-b128-6dc04bafb957" containerName="oc" Jan 21 10:26:27 crc kubenswrapper[5113]: I0121 10:26:27.639399 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jb74v"] Jan 21 10:26:27 crc kubenswrapper[5113]: I0121 10:26:27.640123 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jb74v" Jan 21 10:26:27 crc kubenswrapper[5113]: I0121 10:26:27.721087 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75355303-9662-41a7-9b9b-fce22cccad5c-catalog-content\") pod \"redhat-operators-jb74v\" (UID: \"75355303-9662-41a7-9b9b-fce22cccad5c\") " pod="openshift-marketplace/redhat-operators-jb74v" Jan 21 10:26:27 crc kubenswrapper[5113]: I0121 10:26:27.721177 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75355303-9662-41a7-9b9b-fce22cccad5c-utilities\") pod \"redhat-operators-jb74v\" (UID: \"75355303-9662-41a7-9b9b-fce22cccad5c\") " pod="openshift-marketplace/redhat-operators-jb74v" Jan 21 10:26:27 crc kubenswrapper[5113]: I0121 10:26:27.721270 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w8mn\" (UniqueName: \"kubernetes.io/projected/75355303-9662-41a7-9b9b-fce22cccad5c-kube-api-access-8w8mn\") pod \"redhat-operators-jb74v\" (UID: \"75355303-9662-41a7-9b9b-fce22cccad5c\") " pod="openshift-marketplace/redhat-operators-jb74v" Jan 21 10:26:27 crc kubenswrapper[5113]: I0121 10:26:27.823314 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75355303-9662-41a7-9b9b-fce22cccad5c-catalog-content\") pod \"redhat-operators-jb74v\" (UID: \"75355303-9662-41a7-9b9b-fce22cccad5c\") " pod="openshift-marketplace/redhat-operators-jb74v" Jan 21 10:26:27 crc kubenswrapper[5113]: I0121 10:26:27.823420 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75355303-9662-41a7-9b9b-fce22cccad5c-utilities\") pod \"redhat-operators-jb74v\" (UID: 
\"75355303-9662-41a7-9b9b-fce22cccad5c\") " pod="openshift-marketplace/redhat-operators-jb74v" Jan 21 10:26:27 crc kubenswrapper[5113]: I0121 10:26:27.823459 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8w8mn\" (UniqueName: \"kubernetes.io/projected/75355303-9662-41a7-9b9b-fce22cccad5c-kube-api-access-8w8mn\") pod \"redhat-operators-jb74v\" (UID: \"75355303-9662-41a7-9b9b-fce22cccad5c\") " pod="openshift-marketplace/redhat-operators-jb74v" Jan 21 10:26:27 crc kubenswrapper[5113]: I0121 10:26:27.824218 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75355303-9662-41a7-9b9b-fce22cccad5c-catalog-content\") pod \"redhat-operators-jb74v\" (UID: \"75355303-9662-41a7-9b9b-fce22cccad5c\") " pod="openshift-marketplace/redhat-operators-jb74v" Jan 21 10:26:27 crc kubenswrapper[5113]: I0121 10:26:27.824227 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75355303-9662-41a7-9b9b-fce22cccad5c-utilities\") pod \"redhat-operators-jb74v\" (UID: \"75355303-9662-41a7-9b9b-fce22cccad5c\") " pod="openshift-marketplace/redhat-operators-jb74v" Jan 21 10:26:27 crc kubenswrapper[5113]: I0121 10:26:27.846266 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w8mn\" (UniqueName: \"kubernetes.io/projected/75355303-9662-41a7-9b9b-fce22cccad5c-kube-api-access-8w8mn\") pod \"redhat-operators-jb74v\" (UID: \"75355303-9662-41a7-9b9b-fce22cccad5c\") " pod="openshift-marketplace/redhat-operators-jb74v" Jan 21 10:26:27 crc kubenswrapper[5113]: I0121 10:26:27.971501 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jb74v" Jan 21 10:26:28 crc kubenswrapper[5113]: I0121 10:26:28.340502 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:26:28 crc kubenswrapper[5113]: I0121 10:26:28.340589 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:26:28 crc kubenswrapper[5113]: I0121 10:26:28.340652 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" Jan 21 10:26:28 crc kubenswrapper[5113]: I0121 10:26:28.341807 5113 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4334de7c5411ae353dbf627284825a2ca3d89ad65ee471218878e9efbc9f2ae5"} pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 10:26:28 crc kubenswrapper[5113]: I0121 10:26:28.341919 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" containerID="cri-o://4334de7c5411ae353dbf627284825a2ca3d89ad65ee471218878e9efbc9f2ae5" gracePeriod=600 Jan 21 10:26:28 crc kubenswrapper[5113]: I0121 10:26:28.487298 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-operators-jb74v"] Jan 21 10:26:28 crc kubenswrapper[5113]: I0121 10:26:28.869912 5113 generic.go:358] "Generic (PLEG): container finished" podID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerID="4334de7c5411ae353dbf627284825a2ca3d89ad65ee471218878e9efbc9f2ae5" exitCode=0 Jan 21 10:26:28 crc kubenswrapper[5113]: I0121 10:26:28.870713 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" event={"ID":"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd","Type":"ContainerDied","Data":"4334de7c5411ae353dbf627284825a2ca3d89ad65ee471218878e9efbc9f2ae5"} Jan 21 10:26:28 crc kubenswrapper[5113]: I0121 10:26:28.870873 5113 scope.go:117] "RemoveContainer" containerID="7a222ced9e6ada1383ad249969c0fe50f3b46dad64f151d2be479e40f9e59f23" Jan 21 10:26:28 crc kubenswrapper[5113]: I0121 10:26:28.873471 5113 generic.go:358] "Generic (PLEG): container finished" podID="75355303-9662-41a7-9b9b-fce22cccad5c" containerID="5800835b96a347a1c592340ad3ddde83f4ce2e5bc4435a8302e2642a96e9b834" exitCode=0 Jan 21 10:26:28 crc kubenswrapper[5113]: I0121 10:26:28.873687 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jb74v" event={"ID":"75355303-9662-41a7-9b9b-fce22cccad5c","Type":"ContainerDied","Data":"5800835b96a347a1c592340ad3ddde83f4ce2e5bc4435a8302e2642a96e9b834"} Jan 21 10:26:28 crc kubenswrapper[5113]: I0121 10:26:28.873762 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jb74v" event={"ID":"75355303-9662-41a7-9b9b-fce22cccad5c","Type":"ContainerStarted","Data":"3857488a5df0aaed06a1fd8649e4125f68479f7cc7b217b4e70650724b5725c1"} Jan 21 10:26:29 crc kubenswrapper[5113]: I0121 10:26:29.886115 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" 
event={"ID":"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd","Type":"ContainerStarted","Data":"8fdeaa6e13b107c64fbf0d9b4123ab59c5d58eb7339d450540a2c8ed57fac5f9"} Jan 21 10:26:30 crc kubenswrapper[5113]: I0121 10:26:30.897250 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jb74v" event={"ID":"75355303-9662-41a7-9b9b-fce22cccad5c","Type":"ContainerStarted","Data":"86521d264eb777ed88ba84be228b5ad754e6e739eb63361a3ff8b52043ee3524"} Jan 21 10:26:31 crc kubenswrapper[5113]: I0121 10:26:31.914035 5113 generic.go:358] "Generic (PLEG): container finished" podID="75355303-9662-41a7-9b9b-fce22cccad5c" containerID="86521d264eb777ed88ba84be228b5ad754e6e739eb63361a3ff8b52043ee3524" exitCode=0 Jan 21 10:26:31 crc kubenswrapper[5113]: I0121 10:26:31.914139 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jb74v" event={"ID":"75355303-9662-41a7-9b9b-fce22cccad5c","Type":"ContainerDied","Data":"86521d264eb777ed88ba84be228b5ad754e6e739eb63361a3ff8b52043ee3524"} Jan 21 10:26:32 crc kubenswrapper[5113]: I0121 10:26:32.924860 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jb74v" event={"ID":"75355303-9662-41a7-9b9b-fce22cccad5c","Type":"ContainerStarted","Data":"98e8ee38ad74dac35430bf8b34f858638bcec4f76b70c32b0441061624aca880"} Jan 21 10:26:32 crc kubenswrapper[5113]: I0121 10:26:32.949462 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jb74v" podStartSLOduration=4.448069391 podStartE2EDuration="5.949441592s" podCreationTimestamp="2026-01-21 10:26:27 +0000 UTC" firstStartedPulling="2026-01-21 10:26:28.874722598 +0000 UTC m=+4118.375549647" lastFinishedPulling="2026-01-21 10:26:30.376094789 +0000 UTC m=+4119.876921848" observedRunningTime="2026-01-21 10:26:32.944967436 +0000 UTC m=+4122.445794495" watchObservedRunningTime="2026-01-21 10:26:32.949441592 +0000 UTC m=+4122.450268641" 
Jan 21 10:26:37 crc kubenswrapper[5113]: I0121 10:26:37.972038 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-jb74v" Jan 21 10:26:37 crc kubenswrapper[5113]: I0121 10:26:37.973462 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jb74v" Jan 21 10:26:38 crc kubenswrapper[5113]: I0121 10:26:38.052886 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jb74v" Jan 21 10:26:39 crc kubenswrapper[5113]: I0121 10:26:39.034596 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jb74v" Jan 21 10:26:39 crc kubenswrapper[5113]: I0121 10:26:39.095207 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jb74v"] Jan 21 10:26:40 crc kubenswrapper[5113]: I0121 10:26:40.998063 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jb74v" podUID="75355303-9662-41a7-9b9b-fce22cccad5c" containerName="registry-server" containerID="cri-o://98e8ee38ad74dac35430bf8b34f858638bcec4f76b70c32b0441061624aca880" gracePeriod=2 Jan 21 10:26:44 crc kubenswrapper[5113]: I0121 10:26:44.049529 5113 generic.go:358] "Generic (PLEG): container finished" podID="75355303-9662-41a7-9b9b-fce22cccad5c" containerID="98e8ee38ad74dac35430bf8b34f858638bcec4f76b70c32b0441061624aca880" exitCode=0 Jan 21 10:26:44 crc kubenswrapper[5113]: I0121 10:26:44.049627 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jb74v" event={"ID":"75355303-9662-41a7-9b9b-fce22cccad5c","Type":"ContainerDied","Data":"98e8ee38ad74dac35430bf8b34f858638bcec4f76b70c32b0441061624aca880"} Jan 21 10:26:44 crc kubenswrapper[5113]: I0121 10:26:44.396069 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jb74v" Jan 21 10:26:44 crc kubenswrapper[5113]: I0121 10:26:44.521376 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75355303-9662-41a7-9b9b-fce22cccad5c-utilities\") pod \"75355303-9662-41a7-9b9b-fce22cccad5c\" (UID: \"75355303-9662-41a7-9b9b-fce22cccad5c\") " Jan 21 10:26:44 crc kubenswrapper[5113]: I0121 10:26:44.521499 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75355303-9662-41a7-9b9b-fce22cccad5c-catalog-content\") pod \"75355303-9662-41a7-9b9b-fce22cccad5c\" (UID: \"75355303-9662-41a7-9b9b-fce22cccad5c\") " Jan 21 10:26:44 crc kubenswrapper[5113]: I0121 10:26:44.521621 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8w8mn\" (UniqueName: \"kubernetes.io/projected/75355303-9662-41a7-9b9b-fce22cccad5c-kube-api-access-8w8mn\") pod \"75355303-9662-41a7-9b9b-fce22cccad5c\" (UID: \"75355303-9662-41a7-9b9b-fce22cccad5c\") " Jan 21 10:26:44 crc kubenswrapper[5113]: I0121 10:26:44.522370 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75355303-9662-41a7-9b9b-fce22cccad5c-utilities" (OuterVolumeSpecName: "utilities") pod "75355303-9662-41a7-9b9b-fce22cccad5c" (UID: "75355303-9662-41a7-9b9b-fce22cccad5c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:26:44 crc kubenswrapper[5113]: I0121 10:26:44.533855 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75355303-9662-41a7-9b9b-fce22cccad5c-kube-api-access-8w8mn" (OuterVolumeSpecName: "kube-api-access-8w8mn") pod "75355303-9662-41a7-9b9b-fce22cccad5c" (UID: "75355303-9662-41a7-9b9b-fce22cccad5c"). InnerVolumeSpecName "kube-api-access-8w8mn". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:26:44 crc kubenswrapper[5113]: I0121 10:26:44.624032 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8w8mn\" (UniqueName: \"kubernetes.io/projected/75355303-9662-41a7-9b9b-fce22cccad5c-kube-api-access-8w8mn\") on node \"crc\" DevicePath \"\"" Jan 21 10:26:44 crc kubenswrapper[5113]: I0121 10:26:44.624074 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75355303-9662-41a7-9b9b-fce22cccad5c-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:26:44 crc kubenswrapper[5113]: I0121 10:26:44.680495 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75355303-9662-41a7-9b9b-fce22cccad5c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "75355303-9662-41a7-9b9b-fce22cccad5c" (UID: "75355303-9662-41a7-9b9b-fce22cccad5c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:26:44 crc kubenswrapper[5113]: I0121 10:26:44.725678 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75355303-9662-41a7-9b9b-fce22cccad5c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:26:45 crc kubenswrapper[5113]: I0121 10:26:45.066619 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jb74v" Jan 21 10:26:45 crc kubenswrapper[5113]: I0121 10:26:45.066662 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jb74v" event={"ID":"75355303-9662-41a7-9b9b-fce22cccad5c","Type":"ContainerDied","Data":"3857488a5df0aaed06a1fd8649e4125f68479f7cc7b217b4e70650724b5725c1"} Jan 21 10:26:45 crc kubenswrapper[5113]: I0121 10:26:45.067446 5113 scope.go:117] "RemoveContainer" containerID="98e8ee38ad74dac35430bf8b34f858638bcec4f76b70c32b0441061624aca880" Jan 21 10:26:45 crc kubenswrapper[5113]: I0121 10:26:45.109482 5113 scope.go:117] "RemoveContainer" containerID="86521d264eb777ed88ba84be228b5ad754e6e739eb63361a3ff8b52043ee3524" Jan 21 10:26:45 crc kubenswrapper[5113]: I0121 10:26:45.113424 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jb74v"] Jan 21 10:26:45 crc kubenswrapper[5113]: I0121 10:26:45.124496 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jb74v"] Jan 21 10:26:45 crc kubenswrapper[5113]: I0121 10:26:45.163175 5113 scope.go:117] "RemoveContainer" containerID="5800835b96a347a1c592340ad3ddde83f4ce2e5bc4435a8302e2642a96e9b834" Jan 21 10:26:46 crc kubenswrapper[5113]: I0121 10:26:46.649862 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vvhzr"] Jan 21 10:26:46 crc kubenswrapper[5113]: I0121 10:26:46.652224 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="75355303-9662-41a7-9b9b-fce22cccad5c" containerName="extract-utilities" Jan 21 10:26:46 crc kubenswrapper[5113]: I0121 10:26:46.652275 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="75355303-9662-41a7-9b9b-fce22cccad5c" containerName="extract-utilities" Jan 21 10:26:46 crc kubenswrapper[5113]: I0121 10:26:46.652324 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="75355303-9662-41a7-9b9b-fce22cccad5c" containerName="extract-content" Jan 21 10:26:46 crc kubenswrapper[5113]: I0121 10:26:46.652342 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="75355303-9662-41a7-9b9b-fce22cccad5c" containerName="extract-content" Jan 21 10:26:46 crc kubenswrapper[5113]: I0121 10:26:46.652409 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="75355303-9662-41a7-9b9b-fce22cccad5c" containerName="registry-server" Jan 21 10:26:46 crc kubenswrapper[5113]: I0121 10:26:46.652427 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="75355303-9662-41a7-9b9b-fce22cccad5c" containerName="registry-server" Jan 21 10:26:46 crc kubenswrapper[5113]: I0121 10:26:46.652935 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="75355303-9662-41a7-9b9b-fce22cccad5c" containerName="registry-server" Jan 21 10:26:46 crc kubenswrapper[5113]: I0121 10:26:46.807507 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vvhzr"] Jan 21 10:26:46 crc kubenswrapper[5113]: I0121 10:26:46.807774 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vvhzr" Jan 21 10:26:46 crc kubenswrapper[5113]: I0121 10:26:46.853729 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75355303-9662-41a7-9b9b-fce22cccad5c" path="/var/lib/kubelet/pods/75355303-9662-41a7-9b9b-fce22cccad5c/volumes" Jan 21 10:26:46 crc kubenswrapper[5113]: I0121 10:26:46.885852 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2caf494-8cca-40a5-ac96-8d6538f2ccc6-catalog-content\") pod \"community-operators-vvhzr\" (UID: \"d2caf494-8cca-40a5-ac96-8d6538f2ccc6\") " pod="openshift-marketplace/community-operators-vvhzr" Jan 21 10:26:46 crc kubenswrapper[5113]: I0121 10:26:46.885988 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2lzg\" (UniqueName: \"kubernetes.io/projected/d2caf494-8cca-40a5-ac96-8d6538f2ccc6-kube-api-access-v2lzg\") pod \"community-operators-vvhzr\" (UID: \"d2caf494-8cca-40a5-ac96-8d6538f2ccc6\") " pod="openshift-marketplace/community-operators-vvhzr" Jan 21 10:26:46 crc kubenswrapper[5113]: I0121 10:26:46.886058 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2caf494-8cca-40a5-ac96-8d6538f2ccc6-utilities\") pod \"community-operators-vvhzr\" (UID: \"d2caf494-8cca-40a5-ac96-8d6538f2ccc6\") " pod="openshift-marketplace/community-operators-vvhzr" Jan 21 10:26:46 crc kubenswrapper[5113]: I0121 10:26:46.987233 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v2lzg\" (UniqueName: \"kubernetes.io/projected/d2caf494-8cca-40a5-ac96-8d6538f2ccc6-kube-api-access-v2lzg\") pod \"community-operators-vvhzr\" (UID: \"d2caf494-8cca-40a5-ac96-8d6538f2ccc6\") " pod="openshift-marketplace/community-operators-vvhzr" Jan 21 
10:26:46 crc kubenswrapper[5113]: I0121 10:26:46.987338 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2caf494-8cca-40a5-ac96-8d6538f2ccc6-utilities\") pod \"community-operators-vvhzr\" (UID: \"d2caf494-8cca-40a5-ac96-8d6538f2ccc6\") " pod="openshift-marketplace/community-operators-vvhzr" Jan 21 10:26:46 crc kubenswrapper[5113]: I0121 10:26:46.987417 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2caf494-8cca-40a5-ac96-8d6538f2ccc6-catalog-content\") pod \"community-operators-vvhzr\" (UID: \"d2caf494-8cca-40a5-ac96-8d6538f2ccc6\") " pod="openshift-marketplace/community-operators-vvhzr" Jan 21 10:26:46 crc kubenswrapper[5113]: I0121 10:26:46.987984 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2caf494-8cca-40a5-ac96-8d6538f2ccc6-utilities\") pod \"community-operators-vvhzr\" (UID: \"d2caf494-8cca-40a5-ac96-8d6538f2ccc6\") " pod="openshift-marketplace/community-operators-vvhzr" Jan 21 10:26:46 crc kubenswrapper[5113]: I0121 10:26:46.988093 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2caf494-8cca-40a5-ac96-8d6538f2ccc6-catalog-content\") pod \"community-operators-vvhzr\" (UID: \"d2caf494-8cca-40a5-ac96-8d6538f2ccc6\") " pod="openshift-marketplace/community-operators-vvhzr" Jan 21 10:26:47 crc kubenswrapper[5113]: I0121 10:26:47.020898 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2lzg\" (UniqueName: \"kubernetes.io/projected/d2caf494-8cca-40a5-ac96-8d6538f2ccc6-kube-api-access-v2lzg\") pod \"community-operators-vvhzr\" (UID: \"d2caf494-8cca-40a5-ac96-8d6538f2ccc6\") " pod="openshift-marketplace/community-operators-vvhzr" Jan 21 10:26:47 crc kubenswrapper[5113]: I0121 
10:26:47.125960 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vvhzr" Jan 21 10:26:47 crc kubenswrapper[5113]: I0121 10:26:47.643350 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vvhzr"] Jan 21 10:26:48 crc kubenswrapper[5113]: I0121 10:26:48.096236 5113 generic.go:358] "Generic (PLEG): container finished" podID="d2caf494-8cca-40a5-ac96-8d6538f2ccc6" containerID="b1e5c64e6ebaed70d6ba02e04488e5607c6665c7bb8e0295cc3047a452352923" exitCode=0 Jan 21 10:26:48 crc kubenswrapper[5113]: I0121 10:26:48.096363 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vvhzr" event={"ID":"d2caf494-8cca-40a5-ac96-8d6538f2ccc6","Type":"ContainerDied","Data":"b1e5c64e6ebaed70d6ba02e04488e5607c6665c7bb8e0295cc3047a452352923"} Jan 21 10:26:48 crc kubenswrapper[5113]: I0121 10:26:48.096836 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vvhzr" event={"ID":"d2caf494-8cca-40a5-ac96-8d6538f2ccc6","Type":"ContainerStarted","Data":"2a44e8e1ac8f53f83d67b3e5a7cf56a3dea3ec03e7acb2514a90fb69c3eee67a"} Jan 21 10:26:51 crc kubenswrapper[5113]: I0121 10:26:51.133826 5113 generic.go:358] "Generic (PLEG): container finished" podID="d2caf494-8cca-40a5-ac96-8d6538f2ccc6" containerID="74128fd2676594da16dca6a956c3d0c1761be5cbe4ece35410e691c9bf6626f4" exitCode=0 Jan 21 10:26:51 crc kubenswrapper[5113]: I0121 10:26:51.134279 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vvhzr" event={"ID":"d2caf494-8cca-40a5-ac96-8d6538f2ccc6","Type":"ContainerDied","Data":"74128fd2676594da16dca6a956c3d0c1761be5cbe4ece35410e691c9bf6626f4"} Jan 21 10:26:52 crc kubenswrapper[5113]: I0121 10:26:52.150626 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vvhzr" 
event={"ID":"d2caf494-8cca-40a5-ac96-8d6538f2ccc6","Type":"ContainerStarted","Data":"29dd99acf69e50c42ef96950201d7eadf13985736e3d2b24af0fa839929e09e3"} Jan 21 10:26:52 crc kubenswrapper[5113]: I0121 10:26:52.179101 5113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vvhzr" podStartSLOduration=4.30242227 podStartE2EDuration="6.179082774s" podCreationTimestamp="2026-01-21 10:26:46 +0000 UTC" firstStartedPulling="2026-01-21 10:26:48.097553116 +0000 UTC m=+4137.598380175" lastFinishedPulling="2026-01-21 10:26:49.97421363 +0000 UTC m=+4139.475040679" observedRunningTime="2026-01-21 10:26:52.17398631 +0000 UTC m=+4141.674813389" watchObservedRunningTime="2026-01-21 10:26:52.179082774 +0000 UTC m=+4141.679909823" Jan 21 10:26:57 crc kubenswrapper[5113]: I0121 10:26:57.126276 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-vvhzr" Jan 21 10:26:57 crc kubenswrapper[5113]: I0121 10:26:57.127076 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vvhzr" Jan 21 10:26:57 crc kubenswrapper[5113]: I0121 10:26:57.191833 5113 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vvhzr" Jan 21 10:26:57 crc kubenswrapper[5113]: I0121 10:26:57.266006 5113 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vvhzr" Jan 21 10:26:57 crc kubenswrapper[5113]: I0121 10:26:57.437527 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vvhzr"] Jan 21 10:26:59 crc kubenswrapper[5113]: I0121 10:26:59.217663 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vvhzr" podUID="d2caf494-8cca-40a5-ac96-8d6538f2ccc6" containerName="registry-server" 
containerID="cri-o://29dd99acf69e50c42ef96950201d7eadf13985736e3d2b24af0fa839929e09e3" gracePeriod=2 Jan 21 10:27:00 crc kubenswrapper[5113]: I0121 10:27:00.230857 5113 generic.go:358] "Generic (PLEG): container finished" podID="d2caf494-8cca-40a5-ac96-8d6538f2ccc6" containerID="29dd99acf69e50c42ef96950201d7eadf13985736e3d2b24af0fa839929e09e3" exitCode=0 Jan 21 10:27:00 crc kubenswrapper[5113]: I0121 10:27:00.230998 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vvhzr" event={"ID":"d2caf494-8cca-40a5-ac96-8d6538f2ccc6","Type":"ContainerDied","Data":"29dd99acf69e50c42ef96950201d7eadf13985736e3d2b24af0fa839929e09e3"} Jan 21 10:27:00 crc kubenswrapper[5113]: I0121 10:27:00.231271 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vvhzr" event={"ID":"d2caf494-8cca-40a5-ac96-8d6538f2ccc6","Type":"ContainerDied","Data":"2a44e8e1ac8f53f83d67b3e5a7cf56a3dea3ec03e7acb2514a90fb69c3eee67a"} Jan 21 10:27:00 crc kubenswrapper[5113]: I0121 10:27:00.231296 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a44e8e1ac8f53f83d67b3e5a7cf56a3dea3ec03e7acb2514a90fb69c3eee67a" Jan 21 10:27:00 crc kubenswrapper[5113]: I0121 10:27:00.256098 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vvhzr" Jan 21 10:27:00 crc kubenswrapper[5113]: I0121 10:27:00.339080 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2caf494-8cca-40a5-ac96-8d6538f2ccc6-utilities\") pod \"d2caf494-8cca-40a5-ac96-8d6538f2ccc6\" (UID: \"d2caf494-8cca-40a5-ac96-8d6538f2ccc6\") " Jan 21 10:27:00 crc kubenswrapper[5113]: I0121 10:27:00.339158 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2caf494-8cca-40a5-ac96-8d6538f2ccc6-catalog-content\") pod \"d2caf494-8cca-40a5-ac96-8d6538f2ccc6\" (UID: \"d2caf494-8cca-40a5-ac96-8d6538f2ccc6\") " Jan 21 10:27:00 crc kubenswrapper[5113]: I0121 10:27:00.339256 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2lzg\" (UniqueName: \"kubernetes.io/projected/d2caf494-8cca-40a5-ac96-8d6538f2ccc6-kube-api-access-v2lzg\") pod \"d2caf494-8cca-40a5-ac96-8d6538f2ccc6\" (UID: \"d2caf494-8cca-40a5-ac96-8d6538f2ccc6\") " Jan 21 10:27:00 crc kubenswrapper[5113]: I0121 10:27:00.340922 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2caf494-8cca-40a5-ac96-8d6538f2ccc6-utilities" (OuterVolumeSpecName: "utilities") pod "d2caf494-8cca-40a5-ac96-8d6538f2ccc6" (UID: "d2caf494-8cca-40a5-ac96-8d6538f2ccc6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:27:00 crc kubenswrapper[5113]: I0121 10:27:00.361129 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2caf494-8cca-40a5-ac96-8d6538f2ccc6-kube-api-access-v2lzg" (OuterVolumeSpecName: "kube-api-access-v2lzg") pod "d2caf494-8cca-40a5-ac96-8d6538f2ccc6" (UID: "d2caf494-8cca-40a5-ac96-8d6538f2ccc6"). InnerVolumeSpecName "kube-api-access-v2lzg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:27:00 crc kubenswrapper[5113]: I0121 10:27:00.442169 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v2lzg\" (UniqueName: \"kubernetes.io/projected/d2caf494-8cca-40a5-ac96-8d6538f2ccc6-kube-api-access-v2lzg\") on node \"crc\" DevicePath \"\"" Jan 21 10:27:00 crc kubenswrapper[5113]: I0121 10:27:00.442204 5113 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d2caf494-8cca-40a5-ac96-8d6538f2ccc6-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:27:00 crc kubenswrapper[5113]: I0121 10:27:00.507933 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2caf494-8cca-40a5-ac96-8d6538f2ccc6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d2caf494-8cca-40a5-ac96-8d6538f2ccc6" (UID: "d2caf494-8cca-40a5-ac96-8d6538f2ccc6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 21 10:27:00 crc kubenswrapper[5113]: I0121 10:27:00.543874 5113 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d2caf494-8cca-40a5-ac96-8d6538f2ccc6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:27:01 crc kubenswrapper[5113]: I0121 10:27:01.237451 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vvhzr" Jan 21 10:27:01 crc kubenswrapper[5113]: I0121 10:27:01.269466 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vvhzr"] Jan 21 10:27:01 crc kubenswrapper[5113]: I0121 10:27:01.276848 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vvhzr"] Jan 21 10:27:02 crc kubenswrapper[5113]: I0121 10:27:02.867903 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2caf494-8cca-40a5-ac96-8d6538f2ccc6" path="/var/lib/kubelet/pods/d2caf494-8cca-40a5-ac96-8d6538f2ccc6/volumes" Jan 21 10:27:04 crc kubenswrapper[5113]: I0121 10:27:04.717723 5113 scope.go:117] "RemoveContainer" containerID="d8f669a33607b943c863e6cdce700a60168e56a8688c047a777f60a9a49cdeb4" Jan 21 10:27:52 crc kubenswrapper[5113]: I0121 10:27:52.796849 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vcw7s_11da35cd-b282-4537-ac8f-b6c86b18c21f/kube-multus/0.log" Jan 21 10:27:52 crc kubenswrapper[5113]: I0121 10:27:52.796941 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vcw7s_11da35cd-b282-4537-ac8f-b6c86b18c21f/kube-multus/0.log" Jan 21 10:27:52 crc kubenswrapper[5113]: I0121 10:27:52.804680 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 10:27:52 crc kubenswrapper[5113]: I0121 10:27:52.804776 5113 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 21 10:28:00 crc kubenswrapper[5113]: I0121 10:28:00.145191 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483188-7256t"] Jan 21 10:28:00 crc kubenswrapper[5113]: 
I0121 10:28:00.146932 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d2caf494-8cca-40a5-ac96-8d6538f2ccc6" containerName="extract-utilities" Jan 21 10:28:00 crc kubenswrapper[5113]: I0121 10:28:00.146950 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2caf494-8cca-40a5-ac96-8d6538f2ccc6" containerName="extract-utilities" Jan 21 10:28:00 crc kubenswrapper[5113]: I0121 10:28:00.146983 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d2caf494-8cca-40a5-ac96-8d6538f2ccc6" containerName="extract-content" Jan 21 10:28:00 crc kubenswrapper[5113]: I0121 10:28:00.146989 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2caf494-8cca-40a5-ac96-8d6538f2ccc6" containerName="extract-content" Jan 21 10:28:00 crc kubenswrapper[5113]: I0121 10:28:00.147001 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d2caf494-8cca-40a5-ac96-8d6538f2ccc6" containerName="registry-server" Jan 21 10:28:00 crc kubenswrapper[5113]: I0121 10:28:00.147008 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2caf494-8cca-40a5-ac96-8d6538f2ccc6" containerName="registry-server" Jan 21 10:28:00 crc kubenswrapper[5113]: I0121 10:28:00.147185 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="d2caf494-8cca-40a5-ac96-8d6538f2ccc6" containerName="registry-server" Jan 21 10:28:00 crc kubenswrapper[5113]: I0121 10:28:00.153058 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483188-7256t" Jan 21 10:28:00 crc kubenswrapper[5113]: I0121 10:28:00.155102 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483188-7256t"] Jan 21 10:28:00 crc kubenswrapper[5113]: I0121 10:28:00.158762 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-mmrr5\"" Jan 21 10:28:00 crc kubenswrapper[5113]: I0121 10:28:00.160873 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 21 10:28:00 crc kubenswrapper[5113]: I0121 10:28:00.161075 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 21 10:28:00 crc kubenswrapper[5113]: I0121 10:28:00.328314 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pptzb\" (UniqueName: \"kubernetes.io/projected/1d47199c-f9a4-4803-8439-3d7b94699d74-kube-api-access-pptzb\") pod \"auto-csr-approver-29483188-7256t\" (UID: \"1d47199c-f9a4-4803-8439-3d7b94699d74\") " pod="openshift-infra/auto-csr-approver-29483188-7256t" Jan 21 10:28:00 crc kubenswrapper[5113]: I0121 10:28:00.429693 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pptzb\" (UniqueName: \"kubernetes.io/projected/1d47199c-f9a4-4803-8439-3d7b94699d74-kube-api-access-pptzb\") pod \"auto-csr-approver-29483188-7256t\" (UID: \"1d47199c-f9a4-4803-8439-3d7b94699d74\") " pod="openshift-infra/auto-csr-approver-29483188-7256t" Jan 21 10:28:00 crc kubenswrapper[5113]: I0121 10:28:00.454715 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pptzb\" (UniqueName: \"kubernetes.io/projected/1d47199c-f9a4-4803-8439-3d7b94699d74-kube-api-access-pptzb\") pod \"auto-csr-approver-29483188-7256t\" (UID: 
\"1d47199c-f9a4-4803-8439-3d7b94699d74\") " pod="openshift-infra/auto-csr-approver-29483188-7256t" Jan 21 10:28:00 crc kubenswrapper[5113]: I0121 10:28:00.484792 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483188-7256t" Jan 21 10:28:00 crc kubenswrapper[5113]: I0121 10:28:00.926345 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483188-7256t"] Jan 21 10:28:01 crc kubenswrapper[5113]: I0121 10:28:01.867308 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483188-7256t" event={"ID":"1d47199c-f9a4-4803-8439-3d7b94699d74","Type":"ContainerStarted","Data":"1ef34e84465b1790ee680ecd812e47ebac4f488710b68670b48116c9a2998e94"} Jan 21 10:28:02 crc kubenswrapper[5113]: I0121 10:28:02.878422 5113 generic.go:358] "Generic (PLEG): container finished" podID="1d47199c-f9a4-4803-8439-3d7b94699d74" containerID="90ee05a27d1032dbac31668dcf9522bc7b5c4a65f96afc9e8ee3f12a250111df" exitCode=0 Jan 21 10:28:02 crc kubenswrapper[5113]: I0121 10:28:02.878537 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483188-7256t" event={"ID":"1d47199c-f9a4-4803-8439-3d7b94699d74","Type":"ContainerDied","Data":"90ee05a27d1032dbac31668dcf9522bc7b5c4a65f96afc9e8ee3f12a250111df"} Jan 21 10:28:04 crc kubenswrapper[5113]: I0121 10:28:04.161700 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483188-7256t" Jan 21 10:28:04 crc kubenswrapper[5113]: I0121 10:28:04.240206 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pptzb\" (UniqueName: \"kubernetes.io/projected/1d47199c-f9a4-4803-8439-3d7b94699d74-kube-api-access-pptzb\") pod \"1d47199c-f9a4-4803-8439-3d7b94699d74\" (UID: \"1d47199c-f9a4-4803-8439-3d7b94699d74\") " Jan 21 10:28:04 crc kubenswrapper[5113]: I0121 10:28:04.248944 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d47199c-f9a4-4803-8439-3d7b94699d74-kube-api-access-pptzb" (OuterVolumeSpecName: "kube-api-access-pptzb") pod "1d47199c-f9a4-4803-8439-3d7b94699d74" (UID: "1d47199c-f9a4-4803-8439-3d7b94699d74"). InnerVolumeSpecName "kube-api-access-pptzb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 21 10:28:04 crc kubenswrapper[5113]: I0121 10:28:04.342048 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pptzb\" (UniqueName: \"kubernetes.io/projected/1d47199c-f9a4-4803-8439-3d7b94699d74-kube-api-access-pptzb\") on node \"crc\" DevicePath \"\"" Jan 21 10:28:04 crc kubenswrapper[5113]: I0121 10:28:04.911865 5113 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483188-7256t" Jan 21 10:28:04 crc kubenswrapper[5113]: I0121 10:28:04.912194 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483188-7256t" event={"ID":"1d47199c-f9a4-4803-8439-3d7b94699d74","Type":"ContainerDied","Data":"1ef34e84465b1790ee680ecd812e47ebac4f488710b68670b48116c9a2998e94"} Jan 21 10:28:04 crc kubenswrapper[5113]: I0121 10:28:04.912288 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ef34e84465b1790ee680ecd812e47ebac4f488710b68670b48116c9a2998e94" Jan 21 10:28:05 crc kubenswrapper[5113]: I0121 10:28:05.238624 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483182-wf82f"] Jan 21 10:28:05 crc kubenswrapper[5113]: I0121 10:28:05.244289 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483182-wf82f"] Jan 21 10:28:06 crc kubenswrapper[5113]: I0121 10:28:06.855974 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23f765f5-93ee-4740-b8d7-b66bcf1eeb50" path="/var/lib/kubelet/pods/23f765f5-93ee-4740-b8d7-b66bcf1eeb50/volumes" Jan 21 10:28:58 crc kubenswrapper[5113]: I0121 10:28:58.340804 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:28:58 crc kubenswrapper[5113]: I0121 10:28:58.341821 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:29:04 crc 
kubenswrapper[5113]: I0121 10:29:04.924883 5113 scope.go:117] "RemoveContainer" containerID="ca2dad778989735125b13beeddf57b7e12d113aa3d58269f56904a2ecb4a4b04" Jan 21 10:29:28 crc kubenswrapper[5113]: I0121 10:29:28.340058 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:29:28 crc kubenswrapper[5113]: I0121 10:29:28.340814 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:29:58 crc kubenswrapper[5113]: I0121 10:29:58.340496 5113 patch_prober.go:28] interesting pod/machine-config-daemon-7dhnt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:29:58 crc kubenswrapper[5113]: I0121 10:29:58.341321 5113 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:29:58 crc kubenswrapper[5113]: I0121 10:29:58.341395 5113 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" Jan 21 10:29:58 crc kubenswrapper[5113]: I0121 10:29:58.342366 5113 kuberuntime_manager.go:1107] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8fdeaa6e13b107c64fbf0d9b4123ab59c5d58eb7339d450540a2c8ed57fac5f9"} pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 10:29:58 crc kubenswrapper[5113]: I0121 10:29:58.342462 5113 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerName="machine-config-daemon" containerID="cri-o://8fdeaa6e13b107c64fbf0d9b4123ab59c5d58eb7339d450540a2c8ed57fac5f9" gracePeriod=600 Jan 21 10:29:58 crc kubenswrapper[5113]: E0121 10:29:58.499125 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:29:59 crc kubenswrapper[5113]: I0121 10:29:59.386511 5113 generic.go:358] "Generic (PLEG): container finished" podID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" containerID="8fdeaa6e13b107c64fbf0d9b4123ab59c5d58eb7339d450540a2c8ed57fac5f9" exitCode=0 Jan 21 10:29:59 crc kubenswrapper[5113]: I0121 10:29:59.386564 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" event={"ID":"46461c0d-1a9e-4b91-bf59-e8a11ee34bdd","Type":"ContainerDied","Data":"8fdeaa6e13b107c64fbf0d9b4123ab59c5d58eb7339d450540a2c8ed57fac5f9"} Jan 21 10:29:59 crc kubenswrapper[5113]: I0121 10:29:59.387664 5113 scope.go:117] "RemoveContainer" containerID="4334de7c5411ae353dbf627284825a2ca3d89ad65ee471218878e9efbc9f2ae5" Jan 21 10:29:59 crc 
kubenswrapper[5113]: I0121 10:29:59.392961 5113 scope.go:117] "RemoveContainer" containerID="8fdeaa6e13b107c64fbf0d9b4123ab59c5d58eb7339d450540a2c8ed57fac5f9" Jan 21 10:29:59 crc kubenswrapper[5113]: E0121 10:29:59.393648 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd" Jan 21 10:30:00 crc kubenswrapper[5113]: I0121 10:30:00.143217 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29483190-dpb98"] Jan 21 10:30:00 crc kubenswrapper[5113]: I0121 10:30:00.144386 5113 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1d47199c-f9a4-4803-8439-3d7b94699d74" containerName="oc" Jan 21 10:30:00 crc kubenswrapper[5113]: I0121 10:30:00.144426 5113 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d47199c-f9a4-4803-8439-3d7b94699d74" containerName="oc" Jan 21 10:30:00 crc kubenswrapper[5113]: I0121 10:30:00.144590 5113 memory_manager.go:356] "RemoveStaleState removing state" podUID="1d47199c-f9a4-4803-8439-3d7b94699d74" containerName="oc" Jan 21 10:30:00 crc kubenswrapper[5113]: I0121 10:30:00.256264 5113 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483190-5fwx8"] Jan 21 10:30:00 crc kubenswrapper[5113]: I0121 10:30:00.256544 5113 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29483190-dpb98"
Jan 21 10:30:00 crc kubenswrapper[5113]: I0121 10:30:00.260151 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Jan 21 10:30:00 crc kubenswrapper[5113]: I0121 10:30:00.260813 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-mmrr5\""
Jan 21 10:30:00 crc kubenswrapper[5113]: I0121 10:30:00.260870 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Jan 21 10:30:00 crc kubenswrapper[5113]: I0121 10:30:00.312480 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483190-5fwx8"]
Jan 21 10:30:00 crc kubenswrapper[5113]: I0121 10:30:00.312544 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483190-dpb98"]
Jan 21 10:30:00 crc kubenswrapper[5113]: I0121 10:30:00.312831 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-5fwx8"
Jan 21 10:30:00 crc kubenswrapper[5113]: I0121 10:30:00.318620 5113 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\""
Jan 21 10:30:00 crc kubenswrapper[5113]: I0121 10:30:00.319783 5113 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\""
Jan 21 10:30:00 crc kubenswrapper[5113]: I0121 10:30:00.366461 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f7hb\" (UniqueName: \"kubernetes.io/projected/41d6f6ab-5e16-4174-aafd-87b841182581-kube-api-access-8f7hb\") pod \"auto-csr-approver-29483190-dpb98\" (UID: \"41d6f6ab-5e16-4174-aafd-87b841182581\") " pod="openshift-infra/auto-csr-approver-29483190-dpb98"
Jan 21 10:30:00 crc kubenswrapper[5113]: I0121 10:30:00.468358 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/142df144-c7c0-4bdd-9e54-90ababbe6776-config-volume\") pod \"collect-profiles-29483190-5fwx8\" (UID: \"142df144-c7c0-4bdd-9e54-90ababbe6776\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-5fwx8"
Jan 21 10:30:00 crc kubenswrapper[5113]: I0121 10:30:00.469778 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/142df144-c7c0-4bdd-9e54-90ababbe6776-secret-volume\") pod \"collect-profiles-29483190-5fwx8\" (UID: \"142df144-c7c0-4bdd-9e54-90ababbe6776\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-5fwx8"
Jan 21 10:30:00 crc kubenswrapper[5113]: I0121 10:30:00.470111 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8f7hb\" (UniqueName: \"kubernetes.io/projected/41d6f6ab-5e16-4174-aafd-87b841182581-kube-api-access-8f7hb\") pod \"auto-csr-approver-29483190-dpb98\" (UID: \"41d6f6ab-5e16-4174-aafd-87b841182581\") " pod="openshift-infra/auto-csr-approver-29483190-dpb98"
Jan 21 10:30:00 crc kubenswrapper[5113]: I0121 10:30:00.470392 5113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpq62\" (UniqueName: \"kubernetes.io/projected/142df144-c7c0-4bdd-9e54-90ababbe6776-kube-api-access-qpq62\") pod \"collect-profiles-29483190-5fwx8\" (UID: \"142df144-c7c0-4bdd-9e54-90ababbe6776\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-5fwx8"
Jan 21 10:30:00 crc kubenswrapper[5113]: I0121 10:30:00.493493 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f7hb\" (UniqueName: \"kubernetes.io/projected/41d6f6ab-5e16-4174-aafd-87b841182581-kube-api-access-8f7hb\") pod \"auto-csr-approver-29483190-dpb98\" (UID: \"41d6f6ab-5e16-4174-aafd-87b841182581\") " pod="openshift-infra/auto-csr-approver-29483190-dpb98"
Jan 21 10:30:00 crc kubenswrapper[5113]: I0121 10:30:00.572049 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/142df144-c7c0-4bdd-9e54-90ababbe6776-config-volume\") pod \"collect-profiles-29483190-5fwx8\" (UID: \"142df144-c7c0-4bdd-9e54-90ababbe6776\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-5fwx8"
Jan 21 10:30:00 crc kubenswrapper[5113]: I0121 10:30:00.572140 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/142df144-c7c0-4bdd-9e54-90ababbe6776-secret-volume\") pod \"collect-profiles-29483190-5fwx8\" (UID: \"142df144-c7c0-4bdd-9e54-90ababbe6776\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-5fwx8"
Jan 21 10:30:00 crc kubenswrapper[5113]: I0121 10:30:00.572262 5113 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qpq62\" (UniqueName: \"kubernetes.io/projected/142df144-c7c0-4bdd-9e54-90ababbe6776-kube-api-access-qpq62\") pod \"collect-profiles-29483190-5fwx8\" (UID: \"142df144-c7c0-4bdd-9e54-90ababbe6776\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-5fwx8"
Jan 21 10:30:00 crc kubenswrapper[5113]: I0121 10:30:00.574695 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/142df144-c7c0-4bdd-9e54-90ababbe6776-config-volume\") pod \"collect-profiles-29483190-5fwx8\" (UID: \"142df144-c7c0-4bdd-9e54-90ababbe6776\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-5fwx8"
Jan 21 10:30:00 crc kubenswrapper[5113]: I0121 10:30:00.579399 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/142df144-c7c0-4bdd-9e54-90ababbe6776-secret-volume\") pod \"collect-profiles-29483190-5fwx8\" (UID: \"142df144-c7c0-4bdd-9e54-90ababbe6776\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-5fwx8"
Jan 21 10:30:00 crc kubenswrapper[5113]: I0121 10:30:00.587177 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483190-dpb98"
Jan 21 10:30:00 crc kubenswrapper[5113]: I0121 10:30:00.592179 5113 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpq62\" (UniqueName: \"kubernetes.io/projected/142df144-c7c0-4bdd-9e54-90ababbe6776-kube-api-access-qpq62\") pod \"collect-profiles-29483190-5fwx8\" (UID: \"142df144-c7c0-4bdd-9e54-90ababbe6776\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-5fwx8"
Jan 21 10:30:00 crc kubenswrapper[5113]: I0121 10:30:00.664778 5113 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-5fwx8"
Jan 21 10:30:00 crc kubenswrapper[5113]: I0121 10:30:00.859477 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29483190-dpb98"]
Jan 21 10:30:00 crc kubenswrapper[5113]: I0121 10:30:00.887651 5113 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483190-5fwx8"]
Jan 21 10:30:00 crc kubenswrapper[5113]: W0121 10:30:00.896346 5113 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod142df144_c7c0_4bdd_9e54_90ababbe6776.slice/crio-4d18924985bb04ae03dfb61151fb131eb96d065a4eb6820a7bd48c327f15ff70 WatchSource:0}: Error finding container 4d18924985bb04ae03dfb61151fb131eb96d065a4eb6820a7bd48c327f15ff70: Status 404 returned error can't find the container with id 4d18924985bb04ae03dfb61151fb131eb96d065a4eb6820a7bd48c327f15ff70
Jan 21 10:30:01 crc kubenswrapper[5113]: I0121 10:30:01.410046 5113 generic.go:358] "Generic (PLEG): container finished" podID="142df144-c7c0-4bdd-9e54-90ababbe6776" containerID="de379a272d931f4dfbe35de04889806ac4fc4756f7a40594e873b9a66a386435" exitCode=0
Jan 21 10:30:01 crc kubenswrapper[5113]: I0121 10:30:01.410177 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-5fwx8" event={"ID":"142df144-c7c0-4bdd-9e54-90ababbe6776","Type":"ContainerDied","Data":"de379a272d931f4dfbe35de04889806ac4fc4756f7a40594e873b9a66a386435"}
Jan 21 10:30:01 crc kubenswrapper[5113]: I0121 10:30:01.410556 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-5fwx8" event={"ID":"142df144-c7c0-4bdd-9e54-90ababbe6776","Type":"ContainerStarted","Data":"4d18924985bb04ae03dfb61151fb131eb96d065a4eb6820a7bd48c327f15ff70"}
Jan 21 10:30:01 crc kubenswrapper[5113]: I0121 10:30:01.411854 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483190-dpb98" event={"ID":"41d6f6ab-5e16-4174-aafd-87b841182581","Type":"ContainerStarted","Data":"f0e060ab4cdc0c4be254824e7f82c93463c07d82a6a26ec7845669e6eff0944f"}
Jan 21 10:30:02 crc kubenswrapper[5113]: I0121 10:30:02.645275 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-5fwx8"
Jan 21 10:30:02 crc kubenswrapper[5113]: I0121 10:30:02.812838 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/142df144-c7c0-4bdd-9e54-90ababbe6776-secret-volume\") pod \"142df144-c7c0-4bdd-9e54-90ababbe6776\" (UID: \"142df144-c7c0-4bdd-9e54-90ababbe6776\") "
Jan 21 10:30:02 crc kubenswrapper[5113]: I0121 10:30:02.812944 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/142df144-c7c0-4bdd-9e54-90ababbe6776-config-volume\") pod \"142df144-c7c0-4bdd-9e54-90ababbe6776\" (UID: \"142df144-c7c0-4bdd-9e54-90ababbe6776\") "
Jan 21 10:30:02 crc kubenswrapper[5113]: I0121 10:30:02.813054 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpq62\" (UniqueName: \"kubernetes.io/projected/142df144-c7c0-4bdd-9e54-90ababbe6776-kube-api-access-qpq62\") pod \"142df144-c7c0-4bdd-9e54-90ababbe6776\" (UID: \"142df144-c7c0-4bdd-9e54-90ababbe6776\") "
Jan 21 10:30:02 crc kubenswrapper[5113]: I0121 10:30:02.814413 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/142df144-c7c0-4bdd-9e54-90ababbe6776-config-volume" (OuterVolumeSpecName: "config-volume") pod "142df144-c7c0-4bdd-9e54-90ababbe6776" (UID: "142df144-c7c0-4bdd-9e54-90ababbe6776"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 21 10:30:02 crc kubenswrapper[5113]: I0121 10:30:02.821954 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/142df144-c7c0-4bdd-9e54-90ababbe6776-kube-api-access-qpq62" (OuterVolumeSpecName: "kube-api-access-qpq62") pod "142df144-c7c0-4bdd-9e54-90ababbe6776" (UID: "142df144-c7c0-4bdd-9e54-90ababbe6776"). InnerVolumeSpecName "kube-api-access-qpq62". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 10:30:02 crc kubenswrapper[5113]: I0121 10:30:02.840997 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/142df144-c7c0-4bdd-9e54-90ababbe6776-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "142df144-c7c0-4bdd-9e54-90ababbe6776" (UID: "142df144-c7c0-4bdd-9e54-90ababbe6776"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 21 10:30:02 crc kubenswrapper[5113]: I0121 10:30:02.915277 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qpq62\" (UniqueName: \"kubernetes.io/projected/142df144-c7c0-4bdd-9e54-90ababbe6776-kube-api-access-qpq62\") on node \"crc\" DevicePath \"\""
Jan 21 10:30:02 crc kubenswrapper[5113]: I0121 10:30:02.915350 5113 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/142df144-c7c0-4bdd-9e54-90ababbe6776-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 21 10:30:02 crc kubenswrapper[5113]: I0121 10:30:02.915363 5113 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/142df144-c7c0-4bdd-9e54-90ababbe6776-config-volume\") on node \"crc\" DevicePath \"\""
Jan 21 10:30:03 crc kubenswrapper[5113]: I0121 10:30:03.430814 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-5fwx8" event={"ID":"142df144-c7c0-4bdd-9e54-90ababbe6776","Type":"ContainerDied","Data":"4d18924985bb04ae03dfb61151fb131eb96d065a4eb6820a7bd48c327f15ff70"}
Jan 21 10:30:03 crc kubenswrapper[5113]: I0121 10:30:03.431154 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d18924985bb04ae03dfb61151fb131eb96d065a4eb6820a7bd48c327f15ff70"
Jan 21 10:30:03 crc kubenswrapper[5113]: I0121 10:30:03.430839 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-5fwx8"
Jan 21 10:30:03 crc kubenswrapper[5113]: I0121 10:30:03.734713 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483145-mhvns"]
Jan 21 10:30:03 crc kubenswrapper[5113]: I0121 10:30:03.741463 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483145-mhvns"]
Jan 21 10:30:04 crc kubenswrapper[5113]: I0121 10:30:04.444935 5113 generic.go:358] "Generic (PLEG): container finished" podID="41d6f6ab-5e16-4174-aafd-87b841182581" containerID="05ffaae77236f8282d67bcd1bfd56609cf9bf52afaff1b0dcbeb2e6c9e932642" exitCode=0
Jan 21 10:30:04 crc kubenswrapper[5113]: I0121 10:30:04.445021 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483190-dpb98" event={"ID":"41d6f6ab-5e16-4174-aafd-87b841182581","Type":"ContainerDied","Data":"05ffaae77236f8282d67bcd1bfd56609cf9bf52afaff1b0dcbeb2e6c9e932642"}
Jan 21 10:30:04 crc kubenswrapper[5113]: I0121 10:30:04.859852 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3729a63-f276-462b-a69f-3ed67c756dce" path="/var/lib/kubelet/pods/b3729a63-f276-462b-a69f-3ed67c756dce/volumes"
Jan 21 10:30:05 crc kubenswrapper[5113]: I0121 10:30:05.081876 5113 scope.go:117] "RemoveContainer" containerID="3661d8362c116f31912869a0e3281bfa1ac63faa30dc35ed5bc1e15bdc45f2c0"
Jan 21 10:30:05 crc kubenswrapper[5113]: I0121 10:30:05.764615 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483190-dpb98"
Jan 21 10:30:05 crc kubenswrapper[5113]: I0121 10:30:05.864551 5113 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8f7hb\" (UniqueName: \"kubernetes.io/projected/41d6f6ab-5e16-4174-aafd-87b841182581-kube-api-access-8f7hb\") pod \"41d6f6ab-5e16-4174-aafd-87b841182581\" (UID: \"41d6f6ab-5e16-4174-aafd-87b841182581\") "
Jan 21 10:30:05 crc kubenswrapper[5113]: I0121 10:30:05.877097 5113 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41d6f6ab-5e16-4174-aafd-87b841182581-kube-api-access-8f7hb" (OuterVolumeSpecName: "kube-api-access-8f7hb") pod "41d6f6ab-5e16-4174-aafd-87b841182581" (UID: "41d6f6ab-5e16-4174-aafd-87b841182581"). InnerVolumeSpecName "kube-api-access-8f7hb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 21 10:30:05 crc kubenswrapper[5113]: I0121 10:30:05.970573 5113 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8f7hb\" (UniqueName: \"kubernetes.io/projected/41d6f6ab-5e16-4174-aafd-87b841182581-kube-api-access-8f7hb\") on node \"crc\" DevicePath \"\""
Jan 21 10:30:06 crc kubenswrapper[5113]: I0121 10:30:06.476772 5113 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29483190-dpb98" event={"ID":"41d6f6ab-5e16-4174-aafd-87b841182581","Type":"ContainerDied","Data":"f0e060ab4cdc0c4be254824e7f82c93463c07d82a6a26ec7845669e6eff0944f"}
Jan 21 10:30:06 crc kubenswrapper[5113]: I0121 10:30:06.477907 5113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0e060ab4cdc0c4be254824e7f82c93463c07d82a6a26ec7845669e6eff0944f"
Jan 21 10:30:06 crc kubenswrapper[5113]: I0121 10:30:06.476990 5113 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29483190-dpb98"
Jan 21 10:30:06 crc kubenswrapper[5113]: I0121 10:30:06.857943 5113 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29483184-kpnqk"]
Jan 21 10:30:06 crc kubenswrapper[5113]: I0121 10:30:06.860798 5113 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29483184-kpnqk"]
Jan 21 10:30:08 crc kubenswrapper[5113]: I0121 10:30:08.860408 5113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3ce9f39-0eaa-4641-a904-ddf81004f047" path="/var/lib/kubelet/pods/d3ce9f39-0eaa-4641-a904-ddf81004f047/volumes"
Jan 21 10:30:10 crc kubenswrapper[5113]: I0121 10:30:10.849531 5113 scope.go:117] "RemoveContainer" containerID="8fdeaa6e13b107c64fbf0d9b4123ab59c5d58eb7339d450540a2c8ed57fac5f9"
Jan 21 10:30:10 crc kubenswrapper[5113]: E0121 10:30:10.850361 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd"
Jan 21 10:30:25 crc kubenswrapper[5113]: I0121 10:30:25.844804 5113 scope.go:117] "RemoveContainer" containerID="8fdeaa6e13b107c64fbf0d9b4123ab59c5d58eb7339d450540a2c8ed57fac5f9"
Jan 21 10:30:25 crc kubenswrapper[5113]: E0121 10:30:25.846381 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd"
Jan 21 10:30:38 crc kubenswrapper[5113]: I0121 10:30:38.854510 5113 scope.go:117] "RemoveContainer" containerID="8fdeaa6e13b107c64fbf0d9b4123ab59c5d58eb7339d450540a2c8ed57fac5f9"
Jan 21 10:30:38 crc kubenswrapper[5113]: E0121 10:30:38.855649 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd"
Jan 21 10:30:49 crc kubenswrapper[5113]: I0121 10:30:49.844085 5113 scope.go:117] "RemoveContainer" containerID="8fdeaa6e13b107c64fbf0d9b4123ab59c5d58eb7339d450540a2c8ed57fac5f9"
Jan 21 10:30:49 crc kubenswrapper[5113]: E0121 10:30:49.844931 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd"
Jan 21 10:31:00 crc kubenswrapper[5113]: I0121 10:31:00.865264 5113 scope.go:117] "RemoveContainer" containerID="8fdeaa6e13b107c64fbf0d9b4123ab59c5d58eb7339d450540a2c8ed57fac5f9"
Jan 21 10:31:00 crc kubenswrapper[5113]: E0121 10:31:00.867256 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd"
Jan 21 10:31:05 crc kubenswrapper[5113]: I0121 10:31:05.144392 5113 scope.go:117] "RemoveContainer" containerID="adc8ea5491c5e6f234ac25df2b2155475ad5c8e4d03ebc8eca99fe65977a9cf2"
Jan 21 10:31:13 crc kubenswrapper[5113]: I0121 10:31:13.843549 5113 scope.go:117] "RemoveContainer" containerID="8fdeaa6e13b107c64fbf0d9b4123ab59c5d58eb7339d450540a2c8ed57fac5f9"
Jan 21 10:31:13 crc kubenswrapper[5113]: E0121 10:31:13.844572 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd"
Jan 21 10:31:27 crc kubenswrapper[5113]: I0121 10:31:27.843587 5113 scope.go:117] "RemoveContainer" containerID="8fdeaa6e13b107c64fbf0d9b4123ab59c5d58eb7339d450540a2c8ed57fac5f9"
Jan 21 10:31:27 crc kubenswrapper[5113]: E0121 10:31:27.845815 5113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-7dhnt_openshift-machine-config-operator(46461c0d-1a9e-4b91-bf59-e8a11ee34bdd)\"" pod="openshift-machine-config-operator/machine-config-daemon-7dhnt" podUID="46461c0d-1a9e-4b91-bf59-e8a11ee34bdd"